(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman

###

As we brace ourselves (and lest we doubt that big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.”*…

Claude Shannon with his creation, Theseus the maze-solving mouse, an early illustration of machine learning and a follow-on project to the work described below

Readers will know of your correspondent’s fascination with the remarkable Claude Shannon (see here and here), remembered as “the father of information theory,” but seminally involved in so much more. In a recent piece in IEEE Spectrum, the redoubtable Rodney Brooks argues that we should add another credit to Shannon’s list…

Among the great engineers of the 20th century, who contributed the most to our 21st-century technologies? I say: Claude Shannon.

Shannon is best known for establishing the field of information theory. In a 1948 paper, one of the greatest in the history of engineering, he came up with a way of measuring the information content of a signal and calculating the maximum rate at which information could be reliably transmitted over any sort of communication channel. The article, titled “A Mathematical Theory of Communication,” describes the basis for all modern communications, including the wireless Internet on your smartphone and even an analog voice signal on a twisted-pair telephone landline. In 1966, the IEEE gave him its highest award, the Medal of Honor, for that work.
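For readers who want the gist in symbols, the heart of that 1948 paper can be stated in two short formulas– given here in standard textbook notation rather than quoted from Brooks’s piece:

H(X) = -\sum_i p_i \log_2 p_i        [bits per symbol]
C = B \log_2\left(1 + \frac{S}{N}\right)        [bits per second]

The first is the entropy of a source whose symbols occur with probabilities p_i– Shannon’s measure of information content; the second is the maximum rate at which information can be sent reliably over a channel of bandwidth B with additive Gaussian noise and signal-to-noise ratio S/N.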

If information theory had been Shannon’s only accomplishment, it would have been enough to secure his place in the pantheon. But he did a lot more…

In 1950 Shannon published an article in Scientific American and also a research paper describing how to program a computer to play chess. He went into detail on how to design a program for an actual computer…
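Shannon’s proposal, in outline, was what we now call minimax search: look a few moves ahead in the game tree, score the resulting positions with a numerical evaluation function, and choose the move that is best against the opponent’s best reply. Here is a minimal, hypothetical sketch in Python– the game “interface” (legal_moves, apply_move, evaluate) is invented for illustration, not taken from Shannon’s paper:

```python
# A minimal sketch of the minimax idea Shannon outlined: search the game tree a
# few moves deep, score the leaf positions with a numerical evaluation function,
# and choose assuming both sides play their best. The game interface
# (legal_moves, apply_move, evaluate) is hypothetical, invented for illustration.
from typing import Callable

def minimax(position, depth: int, maximizing: bool,
            legal_moves: Callable, apply_move: Callable, evaluate: Callable) -> float:
    """Return the minimax value of `position`, looking `depth` plies ahead."""
    moves = list(legal_moves(position))
    if depth == 0 or not moves:
        return evaluate(position)             # the numerical evaluation function
    values = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves]
    return max(values) if maximizing else min(values)

# Toy demo: the "position" is a running total, each move adds 1 or 2, and the
# maximizer wants the final total to be large while the minimizer wants it small.
value = minimax(0, depth=3, maximizing=True,
                legal_moves=lambda p: [1, 2],
                apply_move=lambda p, m: p + m,
                evaluate=lambda p: p)
print(value)  # -> 5
```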

Shannon did all this at a time when there were fewer than 10 computers in the world. And they were all being used for numerical calculations. He began his research paper by speculating on all sorts of things that computers might be programmed to do beyond numerical calculations, including designing relay and switching circuits, designing electronic filters for communications, translating between human languages, and making logical deductions. Computers do all these things today…

The “father of information theory” also paved the way for AI: “How Claude Shannon Helped Kick-start Machine Learning,” from @rodneyabrooks in @IEEESpectrum.

* Claude Shannon (who may or may not have been kidding…)

###

As we ponder possibility, we might send uncertain birthday greetings to Werner Karl Heisenberg; he was born on this date in 1901.  A theoretical physicist, he made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, superconductivity, cosmic rays, and subatomic particles.  But he is most widely remembered as a pioneer of quantum mechanics and author of what’s become known as the Heisenberg Uncertainty Principle.  Heisenberg was awarded the Nobel Prize in Physics for 1932 “for the creation of quantum mechanics.”

During World War II, Heisenberg was part of the team attempting to create an atomic bomb for Germany– for which he was arrested and detained by the Allies at the end of the conflict.  He was returned to Germany, where he became director of the Kaiser Wilhelm Institute for Physics, which soon thereafter was renamed the Max Planck Institute for Physics. He later served as president of the German Research Council, chairman of the Commission for Atomic Physics, chairman of the Nuclear Physics Working Group, and president of the Alexander von Humboldt Foundation.

“Some things are so serious that one can only joke about them.”

Werner Heisenberg

source

“In the attempt to make scientific discoveries, every problem is an opportunity and the more difficult the problem, the greater will be the importance of its solution”*…

(Roughly) Daily is headed into its traditional Holiday hibernation; regular service will begin again very early in the New Year.

It seems appropriate (especially given the travails of this past year) to end the year on a positive and optimistic note, with a post celebrating an extraordinary accomplishment– Science magazine’s (thus, the AAAS’) “Breakthrough of the Year” for 2021…

In his 1972 Nobel Prize acceptance speech, American biochemist Christian Anfinsen laid out a vision: One day it would be possible, he said, to predict the 3D structure of any protein merely from its sequence of amino acid building blocks. With hundreds of thousands of proteins in the human body alone, such an advance would have vast applications, offering insights into basic biology and revealing promising new drug targets. Now, after nearly 50 years, researchers have shown that artificial intelligence (AI)-driven software can churn out accurate protein structures by the thousands—an advance that realizes Anfinsen’s dream and is Science’s 2021 Breakthrough of the Year.

AI-powered predictions show proteins finding their shapes– the full story: “Protein structures for all.”

And read Nature’s profile of the scientist behind the breakthrough: “John Jumper: Protein predictor.”

* E. O. Wilson

###

As we celebrate science, we might send well-connected birthday greetings to Robert Elliot Kahn; he was born on this date in 1938. An electrical engineer and computer scientist, Kahn, together with his collaborator Vint Cerf, proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet. Before that, he had helped build– along with fellow computer scientists Lawrence Roberts, Paul Baran, and Leonard Kleinrock– the ARPANET, the first network to successfully link computers around the country.

Kahn has won the Turing Award, the National Medal of Technology, and the Presidential Medal of Freedom, among many, many other awards and honors.

source

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

Humor is said to be the quintessential human capacity, the last thing that AI could– will?– conquer…

New Yorker cartoons are inextricably woven into the fabric of American visual culture. With an instantly recognizable formula — usually, a black-and-white drawing of an imagined scenario followed by a quippy caption in sleek Caslon Pro Italic — the daily gags are delightful satires of our shared human experience, riffing on everything from cats and produce shopping to climate change and the COVID-19 pandemic. The New Yorker’s famous Cartoon Caption Contest, which asks readers to submit their wittiest one-liners, gets an average of 5,732 entries each week, and the magazine receives thousands of drawings every month from hopeful artists.

What if a computer tried its hand at the iconic comics?

Playing on their ubiquity and familiarity, comics artist Ilan Manouach and AI engineer Ioannis [or Yiannis] Siglidis developed the Neural Yorker, an artificial intelligence (AI) engine that posts computer-generated cartoons on Twitter. The project consists of image-and-caption combinations produced by a generative adversarial network (GAN), a deep-learning-based model. The network is trained using a database of punchlines and images of cartoons found online and then “learns” to create new gags in the New Yorker’s iconic style, with hilarious (and sometimes unsettling) results…
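For the curious, here is what “generative adversarial network” means in practice– a minimal, hypothetical PyTorch sketch of the adversarial training described above, not the Neural Yorker’s actual code (its architecture and data aren’t public here); all sizes and names are invented for illustration:

```python
# A toy GAN, for illustration only: a generator maps random noise to small
# grayscale "cartoon" images, while a discriminator learns to tell real images
# from generated ones. A minimal sketch of the technique, not the Neural Yorker's code.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28        # toy image size (flattened 28x28 pixels)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),       # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # probability the input is "real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update; real_images is a (batch, IMG_DIM) tensor in [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator: real images should score 1, fakes should score 0.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update the generator: it is rewarded when its fakes fool the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in for a real dataset of cartoon images (the caption side of the project
# would be handled by a separate text model).
train_step(torch.rand(16, IMG_DIM) * 2 - 1)
```

Trained on real cartoons, the two networks push each other along: the generator gets better at faking, the discriminator at spotting fakes, until the generated images start to pass.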

Comics artist Ilan Manouach (@IlanManouach) and AI engineer Yiannis Siglidis created The Neural Yorker: “Computer-Generated New Yorker Cartoons Are Delightfully Weird.”

For comparison’s sake, see “142 Of The Funniest New Yorker Cartoons Ever.”

* Alan Kay

###

As we go for the guffaw, we might recall that it was on this date in 1922 that the first chapter in Walt Disney’s career as an animator came to a close when he released the 7th and next-to-last “Laugh-O-Gram” cartoon adaptation of a fairy tale, “Jack the Giant Killer.”

Disney’s first animated films began in 1920 as after-work projects when Disney was a commercial artist for an advertising company in Kansas City. He made these cartoons by himself and with the help of a few friends.

He started by persuading Frank Newman, Kansas City’s leading exhibitor, to include short snippets of animation in the series of weekly newsreels Newman produced for his chain of three theaters. Tactfully called “Newman Laugh-O-grams,” Disney’s footage was meant to mix advertising with topical humor…

The Laugh-O-grams were a hit, leading to commissions for animated intermission fillers and coming attractions slides for Newman’s theaters. Spurred by his success, the 19-year-old Disney decided to try something more ambitious: animated fairy tales. Influenced by New York animator Paul Terry’s spoofs of Aesop’s Fables, which had premiered in June 1920, Disney decided not only to parody fairy-tale classics but also to modernize them by having them play off recent events. With the help of high school student Rudy Ising, who later co-founded the Warner Brothers and MGM cartoon studios, and other local would-be cartoonists, Disney [made 7 animated shorts, of which “Jack, the Giant Killer” was the penultimate].

Walt Disney’s Laugh-O-grams

“Foresight begins when we accept that we are now creating a civilization of risk”*…

There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy…

As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?”

Agre’s papers are here.

* Jacques Ellul

###

As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean from London to U.S. President James Buchanan, in Washington, D.C.– initiating a new era in global communications.

Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland, Canada. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.
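A quick check, using only the figures above: 509 letters × ~2 minutes per letter ≈ 1,018 minutes, or roughly 17 hours– consistent with the 10:50am-to-4:30am transmission window of about seventeen hours and forty minutes.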

After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.

source
