(Roughly) Daily

Posts Tagged ‘neural network’

“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing

###

As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame

source

“To sleep: perchance to dream: ay, there’s the rub”*…

I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, how its grounding in mechanical systems inflicted such cruelty on workers whom Taylor demanded ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they have some explanatory power, they also have glaring deficits.

Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it’s unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses on how brains work!

Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model mistakes quirks of its training data for general truths, and then overgeneralizes from them.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”

That’s overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.
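
To make the mechanics concrete (this toy example is mine, not Doctorow’s or Hoel’s): give an over-flexible model a handful of noisy points drawn from a simple straight line and it will happily memorize the noise. Its error on the training points is near zero, but its guesses about new points are typically much worse than those of a simpler model.

```python
# Toy illustration of overfitting (assumed setup, not from the post):
# ten noisy samples of a straight line, fit with a simple model and an
# over-flexible one, then scored on fresh points from the same line.
import numpy as np

rng = np.random.default_rng(0)

# Training data: y = 2x plus a little noise, only ten examples.
x_train = np.linspace(0, 1, 10)
y_train = 2.0 * x_train + rng.normal(scale=0.1, size=x_train.size)

# Test data: the noiseless ground truth, densely sampled.
x_test = np.linspace(0, 1, 100)
y_test = 2.0 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial of this degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 fit threads through every training point (tiny training error) while wiggling wildly in between, so it scores worse on the fresh test points; that gap is overfitting.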

To combat overfitting, ML researchers sometimes inject noise into the training data, in an effort to break up these spurious correlations.
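
Here’s a minimal sketch of that trick, continuing the toy example above (the jitter scale and number of copies are arbitrary choices of mine, not anything from the post): instead of fitting the over-flexible model to the ten original points, fit it to many noise-jittered copies of them, which tends to smooth out the spurious wiggles.

```python
# Minimal sketch of noise injection as a regularizer (assumed toy setup,
# not Hoel's model or Doctorow's code): jitter the inputs of a tiny
# training set before re-fitting the same over-flexible model.
import numpy as np

rng = np.random.default_rng(0)

x_train = np.linspace(0, 1, 10)
y_train = 2.0 * x_train + rng.normal(scale=0.1, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = 2.0 * x_test  # noiseless ground truth for judging generalization

# Noise injection: fifty jittered copies of each training input.
copies = 50
x_aug = np.repeat(x_train, copies) + rng.normal(scale=0.05, size=x_train.size * copies)
y_aug = np.repeat(y_train, copies)

for name, (xs, ys) in [("original data", (x_train, y_train)),
                       ("noise-augmented data", (x_aug, y_aug))]:
    coeffs = np.polyfit(xs, ys, 9)  # same over-flexible degree-9 model as before
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"{name}: test MSE {test_mse:.4f}")
```

In this setup the fit to the jittered data generally tracks the underlying line better than the fit to the raw ten points; that improvement is the rough analogy Hoel draws to dreaming.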

And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by overgeneralization.

Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…

Sleeping, dreaming, and the importance of a nightly dose of irrationality – Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.

(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)

* Shakespeare, Hamlet

###

As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.

source

“Electricity is really just organized lightning”*…

A diagram from Galvani’s De viribus electricitatis in motu musculari commentarius, 1791.

In Mary Shelley’s Frankenstein, published in 1818, the young Victor Frankenstein becomes obsessed with the idea that electricity is a kind of fluid that endows living things with their life force. This obsession leads to tragedy.

Shelley’s view of electricity was, in fact, not an uncommon perspective at the time: just a few decades earlier the Italian scientist Luigi Galvani had shown that a shock of static electricity applied to the legs of a dismembered frog would cause the legs to kick. Galvani concluded that there existed a kind of “animal electric fluid” that was responsible for the animation of living creatures.

In the two hundred years since Frankenstein, our view of electricity has certainly evolved, as has our ability to generate and control electric currents. But do we really understand what we’re doing? Do we even know what electricity is?

Physicist Brian Skinner (@gravity_levity) explains: “Here’s why we don’t understand what electricity is.”

Pair with “Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science.”

* George Carlin

###

As we plug in, we might send really fast birthday greetings to Leon Cooper; he was born on this date in 1930. A physicist, he shared the Nobel Prize in 1972 (with John Bardeen and John Robert Schrieffer) for contributing the concept of Cooper electron pairs, which forms the basis of the BCS (their initials) theory of superconductivity. He is also one of the namesakes and co-developers of the BCM theory of synaptic plasticity.

He went on to become a cofounder and co-chairman of Nestor, Inc., a company that applies neural-network systems to complex applications. The company built computer-based adaptive pattern-recognition and risk-assessment systems that could, for example, accurately classify complex patterns in sonar, radar or imaging systems. He also founded and was director of Brown University’s Institute for Brain and Neural Systems, which develops cognitive pharmaceuticals and intelligent systems for electronics, automobiles and communications.

The character “Sheldon Cooper” in The Big Bang Theory is partially named for Cooper.

source