(Roughly) Daily


“O brave new world, that has such people in ’t!”*…

The estimable Steven Johnson suggests that the creation of Disney’s masterpiece, Snow White, gives us a preview of what may be coming with AI algorithms sophisticated enough to pass for sentient beings…

… You can make the argument that the single most dramatic acceleration point in the history of illusion occurred between the years of 1928 and 1937, the years between the release of Steamboat Willie [here], Disney’s breakthrough sound cartoon introducing Mickey Mouse, and the completion of his masterpiece, Snow White, the first long-form animated film in history [here— actually the first full-length animated feature produced in the U.S.; the first produced anywhere in color]. It is hard to think of another stretch where the formal possibilities of an artistic medium expanded in such a dramatic fashion, in such a short amount of time.

[There follows a fascinating history of the Disney Studios’ technical innovations that made Snow White possible, and an account of the film’s remarkable premiere…]

In just nine years, Disney and his team had transformed a quaint illusion—the dancing mouse is whistling!—into an expressive form so vivid and realistic that it could bring people to tears. Disney and his team had created the ultimate illusion: fictional characters created by hand, etched onto celluloid, and projected at twenty-four frames per second, that were somehow so believably human that it was almost impossible not to feel empathy for them.

Those weeping spectators at the Snow White premiere signaled a fundamental change in the relationship between human beings and the illusions concocted to amuse them. Complexity theorists have a term for this kind of change in physical systems: phase transitions. Alter one property of a system—lowering the temperature of a cloud of steam, for instance—and for a while the changes are linear: the steam gets steadily cooler. But then, at a certain threshold point, a fundamental shift happens: below 212 degrees Fahrenheit, the gas becomes liquid water. That moment marks the phase transition: not just cooler steam, but something altogether different.

It is possible—maybe even likely—that a further twist awaits us. When Charles Babbage encountered an automaton of a ballerina as a child in the early 1800s, the “irresistible eyes” of the mechanism convinced him that there was something lifelike in the machine.  Those robotic facial expressions would seem laughable to a modern viewer, but animatronics has made a great deal of progress since then. There may well be a comparable threshold in simulated emotion—via robotics or digital animation, or even the text chat of an AI like LaMDA—that makes it near impossible for humans not to form emotional bonds with a simulated being. We knew the dwarfs in Snow White were not real, but we couldn’t keep ourselves from weeping for their lost princess in sympathy with them. Imagine a world populated by machines or digital simulations that fill our lives with comparable illusion, only this time the virtual beings are not following a storyboard sketched out in Disney’s studios, but instead responding to the twists and turns and unmet emotional needs of our own lives. (The brilliant Spike Jonze film Her imagined this scenario using only a voice.) There is likely to be the equivalent of a Turing Test for artificial emotional intelligence: a machine real enough to elicit an emotional attachment. It may well be that the first simulated intelligence to trigger that connection will be some kind of voice-only assistant, a descendant of software like Alexa or Siri—only these assistants will have such fluid conversational skills and growing knowledge of our own individual needs and habits that we will find ourselves compelled to think of them as more than machines, just as we were compelled to think of those first movie stars as more than just flickering lights on a fabric screen. Once we pass that threshold, a bizarre new world may open up, a world where our lives are accompanied by simulated friends…

Are we in for a phase-shift in our understanding of companionship? “Natural Magic,” from @stevenbjohnson, adapted from his book Wonderland: How Play Made the Modern World.

And for a different, but apposite, perspective from the ever-illuminating L. M. Sacasas (@LMSacasas), see “LaMDA, Lemoine, and the Allures of Digital Re-enchantment.”

* Shakespeare, The Tempest

###

As we rethink relationships, we might recall that it was on this date in 2007 that the original iPhone went on sale. Generally downplayed by traditional technology pundits after its announcement six months earlier, the iPhone was greeted by long lines of buyers around the country on that first day. It quickly became a phenomenon: one million iPhones were sold in only 74 days. Since those early days, the ensuing iPhone models have continued to set sales records and have radically changed not only the smartphone and technology industries, but the world in which they operate as well.

The original iPhone

source

“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman

###

As we brace ourselves (and lest we doubt that big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.”*…

Claude Shannon with his creation, Theseus the maze-solving mouse, an early illustration of machine learning and a follow-on project to the work described below

Readers will know of your correspondent’s fascination with the remarkable Claude Shannon (see here and here), remembered as “the father of information theory,” but seminally involved in so much more. In a recent piece in IEEE Spectrum, the redoubtable Rodney Brooks argues that we should add another credit to Shannon’s list…

Among the great engineers of the 20th century, who contributed the most to our 21st-century technologies? I say: Claude Shannon.

Shannon is best known for establishing the field of information theory. In a 1948 paper, one of the greatest in the history of engineering, he came up with a way of measuring the information content of a signal and calculating the maximum rate at which information could be reliably transmitted over any sort of communication channel. The article, titled “A Mathematical Theory of Communication,” describes the basis for all modern communications, including the wireless Internet on your smartphone and even an analog voice signal on a twisted-pair telephone landline. In 1966, the IEEE gave him its highest award, the Medal of Honor, for that work.
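[A compact way to state those two results– the formulas are standard, not quoted from Brooks’s piece: a source whose symbols occur with probabilities p_i carries entropy

H = -\sum_i p_i \log_2 p_i \quad \text{bits per symbol},

and a channel of bandwidth B hertz with signal-to-noise ratio S/N can reliably carry at most

C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second}.

Every modem, Wi-Fi radio, and cell link since has been engineered against the ceiling that C sets.]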

If information theory had been Shannon’s only accomplishment, it would have been enough to secure his place in the pantheon. But he did a lot more…

In 1950 Shannon published an article in Scientific American and also a research paper describing how to program a computer to play chess. He went into detail on how to design a program for an actual computer…

Shannon did all this at a time when there were fewer than 10 computers in the world. And they were all being used for numerical calculations. He began his research paper by speculating on all sorts of things that computers might be programmed to do beyond numerical calculations, including designing relay and switching circuits, designing electronic filters for communications, translating between human languages, and making logical deductions. Computers do all these things today…
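To make the chess work concrete: the heart of Shannon’s 1950 proposal was to score board positions with a numerical evaluation function and back those scores up the tree of possible moves– what we now call minimax search. Here is a minimal sketch in Python; the game interface (legal_moves, apply, evaluate) is hypothetical, standing in for chess or any other two-player, zero-sum game:

# A sketch of the minimax search at the core of Shannon's proposal.
# The `game` object (legal_moves, apply, evaluate) is a hypothetical
# stand-in for any two-player, zero-sum game.
def minimax(position, depth, maximizing, game):
    """Best score the side to move can force, looking `depth` plies ahead."""
    moves = game.legal_moves(position)
    if depth == 0 or not moves:
        return game.evaluate(position)  # Shannon's position-scoring function
    scores = [minimax(game.apply(position, m), depth - 1, not maximizing, game)
              for m in moves]
    return max(scores) if maximizing else min(scores)

Shannon also saw the catch– the full game tree is astronomically large– and proposed cutting the search off at modest depth and leaning on the evaluation function, the architecture chess programs would follow for decades.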

The “father of information theory” also paved the way for AI: “How Claude Shannon Helped Kick-start Machine Learning,” from @rodneyabrooks in @IEEESpectrum.

* Claude Shannon (who may or may not have been kidding…)

###

As we ponder possibility, we might send uncertain birthday greetings to Werner Karl Heisenberg; he was born on this date in 1901.  A theoretical physicist, he made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, superconductivity, cosmic rays, and subatomic particles.  But he is most widely remembered as a pioneer of quantum mechanics and author of what’s become known as the Heisenberg Uncertainty Principle.  Heisenberg was awarded the Nobel Prize in Physics for 1932 “for the creation of quantum mechanics.”

During World War II, Heisenberg was part of the team attempting to create an atomic bomb for Germany– for which he was arrested and detained by the Allies at the end of the conflict.  He was returned to Germany, where he became director of the Kaiser Wilhelm Institute for Physics, which soon thereafter was renamed the Max Planck Institute for Physics. He later served as president of the German Research Council, chairman of the Commission for Atomic Physics, chairman of the Nuclear Physics Working Group, and president of the Alexander von Humboldt Foundation.

Some things are so serious that one can only joke about them

Werner Heisenberg

source

“In the attempt to make scientific discoveries, every problem is an opportunity and the more difficult the problem, the greater will be the importance of its solution”*…

(Roughly) Daily is headed into its traditional Holiday hibernation; regular service will begin again very early in the New Year.

It seems appropriate (especially given the travails of this past year) to end the year on a positive and optimistic note, with a post celebrating an extraordinary accomplishment– Science magazine’s (thus, the AAAS’) “Breakthrough of the Year” for 2021…

In his 1972 Nobel Prize acceptance speech, American biochemist Christian Anfinsen laid out a vision: One day it would be possible, he said, to predict the 3D structure of any protein merely from its sequence of amino acid building blocks. With hundreds of thousands of proteins in the human body alone, such an advance would have vast applications, offering insights into basic biology and revealing promising new drug targets. Now, after nearly 50 years, researchers have shown that artificial intelligence (AI)-driven software can churn out accurate protein structures by the thousands—an advance that realizes Anfinsen’s dream and is Science’s 2021 Breakthrough of the Year.

AI-powered predictions show proteins finding their shapes; read the full story: “Protein structures for all.”

And read Nature‘s profile of the scientist behind the breakthrough: “John Jumper: Protein predictor.”

* E. O. Wilson

###

As we celebrate science, we might send well-connected birthday greetings to Robert Elliot Kahn; he was born on this date in 1938. An electrical engineer and computer scientist, he and his co-creator, Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet. Earlier, he and Vint, along with fellow computer scientists Lawrence Roberts, Paul Baran, and Leonard Kleinrock, had helped build the ARPANET, the first network to successfully link computers around the country.

Kahn has won the Turing Award, the National Medal of Technology, and the Presidential Medal of Freedom, among many, many other awards and honors.

source

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

Humor is said to be the quintessentially human capacity, the last thing that AI could– will?– conquer…

New Yorker cartoons are inextricably woven into the fabric of American visual culture. With an instantly recognizable formula — usually, a black-and-white drawing of an imagined scenario followed by a quippy caption in sleek Caslon Pro Italic — the daily gags are delightful satires of our shared human experience, riffing on everything from cats and produce shopping to climate change and the COVID-19 pandemic. The New Yorker’s famous Cartoon Caption Contest, which asks readers to submit their wittiest one-liners, gets an average of 5,732 entries each week, and the magazine receives thousands of drawings every month from hopeful artists.

What if a computer tried its hand at the iconic comics?

Playing on their ubiquity and familiarity, comics artist Ilan Manouach and AI engineer Ioannis [or Yiannis] Siglidis developed the Neural Yorker, an artificial intelligence (AI) engine that posts computer-generated cartoons on Twitter. The project consists of image-and-caption combinations produced by a generative adversarial network (GAN), a deep-learning-based model. The network is trained using a database of punchlines and images of cartoons found online and then “learns” to create new gags in the New Yorker‘s iconic style, with hilarious (and sometimes unsettling) results…
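The adversarial idea itself is simple enough to sketch. What follows is a minimal GAN training loop in PyTorch– emphatically not the Neural Yorker’s actual code (the article doesn’t publish it); the toy network sizes and flattened-image representation are placeholders– showing the two-player game at the heart of the technique: a generator learns to make images that a discriminator can no longer tell apart from real ones.

import torch
import torch.nn as nn

LATENT = 64    # size of the random noise vector the generator starts from
IMG = 28 * 28  # flattened image size (placeholder; real cartoons are larger)

# Generator: noise -> fake image. Discriminator: image -> "is it real?" score.
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial round; real_images has shape (batch, IMG)."""
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real cartoons from generated ones.
    fakes = generator(torch.randn(batch, LATENT)).detach()
    d_loss = loss(discriminator(real_images), real) + loss(discriminator(fakes), fake)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss(discriminator(generator(torch.randn(batch, LATENT))), real)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

Run that loop over a scrape of cartoon panels (as Manouach and Siglidis presumably did, at much larger scale and alongside a caption-generating model) and the generator’s output drifts from pure noise toward plausible– and, yes, sometimes unsettling– gags.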

Comics artist Ilan Manouach (@IlanManouach) and AI engineer Yiannis Siglidis created The Neural Yorker: “Computer-Generated New Yorker Cartoons Are Delightfully Weird.”

For comparison’s sake, see “142 Of The Funniest New Yorker Cartoons Ever.”

* Alan Kay

###

As we go for the guffaw, we might recall that it was on this date in 1922 that the first chapter in Walt Disney’s career as an animator came to a close when he released the 7th and next-to-last “Laugh-O-gram” cartoon adaptation of a fairy tale, “Jack the Giant Killer.”

Disney’s first animated films began in 1920 as after-work projects when Disney was a commercial artist for an advertising company in Kansas City. He made these cartoons by himself and with the help of a few friends.

He started by persuading Frank Newman, Kansas City’s leading exhibitor, to include short snippets of animation in the series of weekly newsreels Newman produced for his chain of three theaters. Tactfully called “Newman Laugh-O-grams,” Disney’s footage was meant to mix advertising with topical humor…

The Laugh-O-grams were a hit, leading to commissions for animated intermission fillers and coming attractions slides for Newman’s theaters. Spurred by his success, the 19-year-old Disney decided to try something more ambitious: animated fairy tales. Influenced by New York animator Paul Terry’s spoofs of Aesop’s Fables, which had premiered in June 1920, Disney decided not only to parody fairy-tale classics but also to modernize them by having them play off recent events. With the help of high school student Rudy Ising, who later co-founded the Warner Brothers and MGM cartoon studios, and other local would-be cartoonists, Disney [made 7 animated shorts, of which “Jack the Giant Killer” was the penultimate].

Walt Disney’s Laugh-O-grams