(Roughly) Daily

Posts Tagged ‘machine learning’

“These are the forgeries of jealousy”*…

Analysis of Leonardo da Vinci’s Salvator Mundi required dividing a high-resolution image of the complete painting into a set of overlapping square tiles. But only those tiles that contained sufficient visual information, such as the ones outlined here, were input to the author’s neural-network classifier.
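The tile-selection step that caption describes can be sketched in a few lines of Python. This is only an illustration: the tile size, the stride, and the entropy threshold used to decide whether a tile carries enough visual information are assumptions, not the Franks’ actual parameters.

```python
# Hypothetical sketch of the tiling-and-filtering step; tile size, stride,
# and the entropy threshold are illustrative guesses, not the authors' values.
import numpy as np
from PIL import Image
from skimage.measure import shannon_entropy

def informative_tiles(path, tile=256, stride=128, min_entropy=4.0):
    """Yield overlapping square tiles that carry enough visual information."""
    image = np.asarray(Image.open(path).convert("L"))
    height, width = image.shape
    for top in range(0, height - tile + 1, stride):
        for left in range(0, width - tile + 1, stride):
            patch = image[top:top + tile, left:left + tile]
            # Low-entropy patches (blank background, flat varnish) are skipped.
            if shannon_entropy(patch) >= min_entropy:
                yield (top, left), patch
```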

Is it authentic? Attorney and AI practitioner Steven J. Frank, working with his wife, art historian and curator Andrea Frank (together, Art Eye-D Associates), brings machine learning to bear…

The sound must have been deafening—all those champagne corks popping at Christie’s, the British auction house, on 15 November 2017. A portrait of Jesus, known as Salvator Mundi (Latin for “savior of the world”), had just sold at Christie’s in New York for US $450.3 million, making it by far the most expensive painting ever to change hands.

But even as the gavel fell, a persistent chorus of doubters voiced skepticism. Was it really painted by Leonardo da Vinci, the towering Renaissance master, as a panel of experts had determined six years earlier? A little over 50 years before that, a Louisiana man had purchased the painting in London for a mere £45. And prior to the rediscovery of Salvator Mundi, no Leonardo painting had been uncovered since 1909.

Some of the doubting experts questioned the work’s provenance—the historical record of sales and transfers—and noted that the heavily damaged painting had undergone extensive restoration. Others saw the hand of one of Leonardo’s many protégés rather than the work of the master himself.

Is it possible to establish the authenticity of a work of art amid conflicting expert opinions and incomplete evidence? Scientific measurements can establish a painting’s age and reveal subsurface detail, but they can’t directly identify its creator. That requires subtle judgments of style and technique, which, it might seem, only art experts could provide. In fact, this task is well suited to computer analysis, particularly by neural networks—computer algorithms that excel at examining patterns. Convolutional neural networks (CNNs), designed to analyze images, have been used to good advantage in a wide range of applications, including recognizing faces and helping to pilot self-driving cars. Why not also use them to authenticate art?

That’s what I asked my wife, Andrea M. Frank, a professional curator of art images, in 2018. Although I have spent most of my career working as an intellectual-property attorney, my addiction to online education had recently culminated in a graduate certificate in artificial intelligence from Columbia University. Andrea was contemplating retirement. So together we took on this new challenge…
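The excerpt doesn’t spell out the network the Franks trained, but a tile-level convolutional classifier generally has a familiar shape. A minimal sketch in PyTorch, with layer sizes chosen purely for illustration (they are not the authors’ architecture):

```python
# Illustrative tile classifier; the architecture is an assumption, not the
# authors' actual model.
import torch.nn as nn

class TileClassifier(nn.Module):
    """Tiny CNN scoring a 256x256 grayscale tile as 'attributed hand' vs. 'other'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, x):
        return self.head(self.features(x))
```

A painting-level verdict would then come from aggregating the per-tile scores, for instance by averaging softmax probabilities across all the informative tiles.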

With millions at stake, deep learning enters the art world. The fascinating story: “This AI Can Spot an Art Forgery,” @ArtAEye in @IEEESpectrum.

* Shakespeare (Titania, A Midsummer Night’s Dream, Act II, Scene 1)

###

As we honor authenticity, we might spare a thought for a champion of authenticity in a different sense, Joris Hoefnagel; he died on this date in 1601. A Flemish painter, printmaker, miniaturist, draftsman, and merchant, he is noted for his illustrations of natural history subjects, topographical views, illuminations (he was one of the last manuscript illuminators), and mythological works.

Hoefnagel made a major contribution to the development of topographical drawing. But perhaps more impactfully, his manuscript illuminations and ornamental designs played an important role in the emergence of floral still-life painting as an independent genre in northern Europe at the end of the 16th century. The almost scientific naturalism of his botanical and animal drawings served as a model for a later generation of Netherlandish artists. Through these nature studies he also contributed to the development of natural history, and he was thus a founder of proto-scientific inquiry.

Portrait of Joris Hoefnagel, engraving by Jan Sadeler, 1592 (source)

“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…
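The excerpt doesn’t give the learning algorithms themselves (Scellier’s and McMahon’s schemes differ in their details), but the common thread is adjusting a system’s tunable knobs by trial and error, treating the system as a black box rather than backpropagating through a symbolic model of it. A purely in-silico toy of that idea is finite-difference (zeroth-order) tuning:

```python
# Toy stand-in for trial-and-error "physical" learning: treat the system as a
# black box and estimate slopes by perturbing its knobs. This is NOT the
# algorithm from the article, just an illustration of gradient-free tuning.
import numpy as np

rng = np.random.default_rng(0)

def system_response(params, x):
    """Stand-in for a physical system's measured output."""
    return np.tanh(x @ params)

def loss(params, x, y):
    return np.mean((system_response(params, x) - y) ** 2)

# Tiny synthetic two-class task (think: telling one spoken vowel from another).
x = rng.normal(size=(200, 3))
y = np.sign(x[:, 0] + 0.5 * x[:, 1])

params = rng.normal(size=3)
eps, lr = 1e-3, 0.5
for _ in range(500):
    grad = np.zeros_like(params)
    for i in range(len(params)):
        probe = np.zeros_like(params)
        probe[i] = eps
        # Two measurements per knob give a finite-difference slope estimate.
        grad[i] = (loss(params + probe, x, y) - loss(params - probe, x, y)) / (2 * eps)
    params -= lr * grad

print("final loss:", loss(params, x, y))
```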

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing

###

As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame

source

“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…
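For a sense of scale, the task the circuit taught itself is one a conventional computer dispatches in a few lines of software. The sketch below — a perceptron on petal measurements from the classic Iris data, which I’m assuming is essentially the task described — is the digital baseline the circuit sidesteps, not a model of the circuit itself.

```python
# Software baseline for the flower-recognition task (NOT the circuit):
# a perceptron trained on petal length and width from the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X = X[:, 2:4]                                  # keep petal length and petal width
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = Perceptron(max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```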

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman

###

As we brace ourselves (and lest we doubt the big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially-viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“In the attempt to make scientific discoveries, every problem is an opportunity and the more difficult the problem, the greater will be the importance of its solution”*…

(Roughly) Daily is headed into its traditional Holiday hibernation; regular service will begin again very early in the New Year.

It seems appropriate (especially given the travails of this past year) to end the year on a positive and optimistic note, with a post celebrating an extraordinary accomplishment– Science magazine‘s (thus, the AAAS‘) “Breakthrough of the Year” for 2021…

In his 1972 Nobel Prize acceptance speech, American biochemist Christian Anfinsen laid out a vision: One day it would be possible, he said, to predict the 3D structure of any protein merely from its sequence of amino acid building blocks. With hundreds of thousands of proteins in the human body alone, such an advance would have vast applications, offering insights into basic biology and revealing promising new drug targets. Now, after nearly 50 years, researchers have shown that artificial intelligence (AI)-driven software can churn out accurate protein structures by the thousands—an advance that realizes Anfinsen’s dream and is Science’s 2021 Breakthrough of the Year.

AI-powered predictions show proteins finding their shapes– the full story: “Protein structures for all.”

And read Nature‘s profile of the scientist behind the breakthrough: “John Jumper: Protein predictor.”

* E. O. Wilson

###

As we celebrate science, we might send well-connected birthday greetings to Robert Elliot Kahn; he was born on this date in 1938. An electrical engineer and computer scientist, he and his collaborator Vint Cerf first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet. Earlier, he and Cerf, along with fellow computer scientists Lawrence Roberts, Paul Baran, and Leonard Kleinrock, had helped build the ARPANET, the first network to successfully link computers around the country.

Kahn has won the Turing Award, the National Medal of Technology, and the Presidential Medal of Freedom, among many, many other awards and honors.

source

“Reality is frequently inaccurate”*…

Machine learning and what it may teach us about reality…

Our latest paradigmatic technology, machine learning, may be revealing the everyday world as more accidental than rule-governed. If so, it will be because machine learning gains its epistemological power from its freedom from the sort of generalisations that we humans can understand or apply.

The opacity of machine learning systems raises serious concerns about their trustworthiness and their tendency towards bias. But the brute fact that they work could be bringing us to a new understanding and experience of what the world is and our role in it…

The world is a black box full of extreme specificity: it might be predictable but that doesn’t mean it is understandable: “Learn from Machine Learning,” by David Weinberger (@dweinberger) in @aeonmag.

(image above: source)

* Douglas Adams, The Restaurant at the End of the Universe

###

As we ruminate on the real, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

 source
