(Roughly) Daily

Posts Tagged ‘machine learning’

“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing


As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame


“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman


As we brace ourselves (and lest we doubt that big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially-viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“In the attempt to make scientific discoveries, every problem is an opportunity and the more difficult the problem, the greater will be the importance of its solution”*…

(Roughly) Daily is headed into its traditional Holiday hibernation; regular service will begin again very early in the New Year.

It seems appropriate (especially given the travails of this past year) to end the year on a positive and optimistic note, with a post celebrating an extraordinary accomplishment– Science magazine‘s (thus, the AAAS‘) “Breakthrough of the Year” for 2021…

In his 1972 Nobel Prize acceptance speech, American biochemist Christian Anfinsen laid out a vision: One day it would be possible, he said, to predict the 3D structure of any protein merely from its sequence of amino acid building blocks. With hundreds of thousands of proteins in the human body alone, such an advance would have vast applications, offering insights into basic biology and revealing promising new drug targets. Now, after nearly 50 years, researchers have shown that artificial intelligence (AI)-driven software can churn out accurate protein structures by the thousands—an advance that realizes Anfinsen’s dream and is Science’s 2021 Breakthrough of the Year.

AI-powered predictions show proteins finding their shapes; read the full story: “Protein structures for all.”

And read Nature‘s profile of the scientist behind the breakthrough: “John Jumper: Protein predictor.”

* E. O. Wilson


As we celebrate science, we might send well-connected birthday greetings to Robert Elliot Kahn; he was born on this date in 1938. An electrical engineer and computer scientist, he and his collaborator, Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet. Earlier, along with fellow computer scientists Lawrence Roberts, Paul Baran, and Leonard Kleinrock, he had helped build the ARPANET, the first network to successfully link computers around the country.

Kahn has won the Turing Award, the National Medal of Technology, and the Presidential Medal Of Freedom, among many, many other awards and honors.


“Reality is frequently inaccurate”*…

Machine learning and what it may teach us about reality…

Our latest paradigmatic technology, machine learning, may be revealing the everyday world as more accidental than rule-governed. If so, it will be because machine learning gains its epistemological power from its freedom from the sort of generalisations that we humans can understand or apply.

The opacity of machine learning systems raises serious concerns about their trustworthiness and their tendency towards bias. But the brute fact that they work could be bringing us to a new understanding and experience of what the world is and our role in it…

The world is a black box full of extreme specificity: it might be predictable but that doesn’t mean it is understandable: “Learn from Machine Learning,” by David Weinberger (@dweinberger) in @aeonmag.

(image above: source)

* Douglas Adams, The Restaurant at the End of the Universe


As we ruminate on the real, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a supercomputer (the Cray XE6 “Hopper” at NERSC) named in her honor.


“To sleep: perchance to dream: ay, there’s the rub”*…

I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, and how its grounding in mechanical systems inflicted such cruelty on the workers Taylor demanded should ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they had some explanatory power, they also had glaring deficits.

Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it’s unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses on how brains work!

Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model latches onto spurious specifics of its training data, so its predictions don’t generalize to new data.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”

That’s overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.
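As a toy illustration of the idea (a sketch of my own, not code from any of the work discussed above): with a few lines of NumPy you can watch a model with too many free parameters thread itself through every noisy training point, while a simpler model captures only the underlying trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying trend (y = x).
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(scale=0.1, size=10)

# A degree-9 polynomial has enough parameters to pass through
# every training point, noise and all; a degree-1 fit can only
# capture the broad trend.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

def mse(model, x, y):
    """Mean squared error of the model's predictions."""
    return float(np.mean((model(x) - y) ** 2))

# The overfit model's training error is essentially zero...
assert mse(overfit, x_train, y_train) < mse(simple, x_train, y_train)

# ...but on fresh data from the same trend, the wiggles between the
# training points typically make its error far worse than the line's.
x_test = np.linspace(0.05, 0.95, 50)
y_test = x_test + rng.normal(scale=0.1, size=50)
print(mse(overfit, x_test, y_test), mse(simple, x_test, y_test))
```

The degree-9 fit memorized the noise – just as a model trained only on Phoenix rally photos memorizes suntans.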

To combat overfitting, ML researchers sometimes inject noise into the training data, as an effort to break up these spurious correlations.
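A minimal sketch of that remedy (the function name and noise scale here are illustrative, not from any particular library): adding jittered copies of each training sample forces a model to rely only on patterns robust enough to survive the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_with_noise(features, n_copies=4, scale=0.05):
    """Return the original samples plus n_copies jittered copies.

    Small random perturbations break up spurious, overly specific
    correlations (a particular camera angle, a particular lighting)
    so only robust patterns survive training.
    """
    copies = [features]
    for _ in range(n_copies):
        copies.append(features + rng.normal(scale=scale, size=features.shape))
    return np.concatenate(copies, axis=0)

X = np.array([[0.2, 0.9], [0.8, 0.1]])  # two training samples
X_aug = augment_with_noise(X)
assert X_aug.shape == (10, 2)  # 2 originals + 4 noisy copies of each
```

(The labels for the augmented set are just the originals repeated, e.g. with `np.tile`, since jittering a sample doesn't change what it is.)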

And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by overgeneralization.

Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…

Sleeping, dreaming, and the importance of a nightly dose of irrationality– Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.

(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)

* Shakespeare, Hamlet


As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.

