(Roughly) Daily

Posts Tagged ‘machine learning’

“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman


As we brace ourselves (and lest we doubt the big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially-viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“In the attempt to make scientific discoveries, every problem is an opportunity and the more difficult the problem, the greater will be the importance of its solution”*…

(Roughly) Daily is headed into its traditional Holiday hibernation; regular service will begin again very early in the New Year.

It seems appropriate (especially given the travails of this past year) to end the year on a positive and optimistic note, with a post celebrating an extraordinary accomplishment– Science magazine‘s (thus, the AAAS‘) “Breakthrough of the Year” for 2021…

In his 1972 Nobel Prize acceptance speech, American biochemist Christian Anfinsen laid out a vision: One day it would be possible, he said, to predict the 3D structure of any protein merely from its sequence of amino acid building blocks. With hundreds of thousands of proteins in the human body alone, such an advance would have vast applications, offering insights into basic biology and revealing promising new drug targets. Now, after nearly 50 years, researchers have shown that artificial intelligence (AI)-driven software can churn out accurate protein structures by the thousands—an advance that realizes Anfinsen’s dream and is Science’s 2021 Breakthrough of the Year.

AI-powered predictions show proteins finding their shapes: the full story: “Protein structures for all.”

And read Nature‘s profile of the scientist behind the breakthrough: “John Jumper: Protein predictor.”

* E. O. Wilson


As we celebrate science, we might send well-connected birthday greetings to Robert Elliot Kahn; he was born on this date in 1938. An electrical engineer and computer scientist, he and his co-creator, Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet. Later, he and Vint, along with fellow computer scientists Lawrence Roberts, Paul Baran, and Leonard Kleinrock, built the ARPANET, the first network to successfully link computers around the country.

Kahn has won the Turing Award, the National Medal of Technology, and the Presidential Medal of Freedom, among many, many other awards and honors.


“Reality is frequently inaccurate”*…

Machine learning and what it may teach us about reality…

Our latest paradigmatic technology, machine learning, may be revealing the everyday world as more accidental than rule-governed. If so, it will be because machine learning gains its epistemological power from its freedom from the sort of generalisations that we humans can understand or apply.

The opacity of machine learning systems raises serious concerns about their trustworthiness and their tendency towards bias. But the brute fact that they work could be bringing us to a new understanding and experience of what the world is and our role in it…

The world is a black box full of extreme specificity: it might be predictable but that doesn’t mean it is understandable: “Learn from Machine Learning,” by David Weinberger (@dweinberger) in @aeonmag.

(image above: source)

* Douglas Adams, The Restaurant at the End of the Universe


As we ruminate on the real, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.


“To sleep: perchance to dream: ay, there’s the rub”*…

I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, how its grounding in mechanical systems inflicted such cruelty on workers whom Taylor demanded ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they had some explanatory power, they also had glaring deficits.

Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it’s unsurprising that DNN – based on how we think brains work – has stimulated new hypotheses on how brains work!

Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model learns patterns that hold only in its training data and so fails to generalize beyond it.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”

That’s overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.

To combat overfitting, ML researchers sometimes inject noise into the training data, as an effort to break up these spurious correlations.
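The overfit-then-inject-noise cycle is easy to see in a toy setting. The sketch below is my own illustration (assuming NumPy; the polynomial degree and noise levels are arbitrary choices for demonstration, not anything from Hoel's work): a high-degree polynomial memorizes ten noisy samples of a simple relationship, and jittered copies of the training data smooth it back out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of the true relationship y = x.
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.05, size=x_train.shape)

# A degree-9 polynomial has enough freedom to pass through all ten
# points exactly -- it memorizes the noise along with the signal.
overfit = np.polyfit(x_train, y_train, deg=9)

# Noise injection: augment the training set with jittered copies,
# breaking up patterns that exist only by accident in the sample.
x_aug = np.tile(x_train, 20) + rng.normal(0, 0.05, size=200)
y_aug = np.tile(y_train, 20)
smoothed = np.polyfit(x_aug, y_aug, deg=9)

# Score both fits against the true relationship on held-out points;
# the augmented fit generally tracks the true line far more closely.
x_test = np.linspace(0.05, 0.95, 50)
err_overfit = np.mean((np.polyval(overfit, x_test) - x_test) ** 2)
err_smoothed = np.mean((np.polyval(smoothed, x_test) - x_test) ** 2)
```

The jitter plays the role Hoel assigns to dreams: deliberately corrupted "experience" that keeps the model from treating every quirk of its training set as a law.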

And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by spurious correlations.

Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…

Sleeping, dreaming, and the importance of a nightly dose of irrationality– Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.

(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)

* Shakespeare, Hamlet


As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.


“Facts alone, no matter how numerous or verifiable, do not automatically arrange themselves into an intelligible, or truthful, picture of the world. It is the task of the human mind to invent a theoretical framework to account for them.”*…

PPPL physicist Hong Qin in front of images of planetary orbits and computer code

… or maybe not. A couple of decades ago, your correspondent came across a short book that aimed to explain how we think we know what we think we know, Truth: A History and a Guide for the Perplexed, by Felipe Fernández-Armesto (then, a professor of history at Oxford; now, at Notre Dame)…

According to Fernández-Armesto, people throughout history have sought to get at the truth in one or more of four basic ways. The first is through feeling. Truth is a tangible entity. The third-century B.C. Chinese sage Chuang Tzu stated, ”The universe is one.” Others described the universe as a unity of opposites. To the fifth-century B.C. Greek philosopher Heraclitus, the cosmos is a tension like that of the bow or the lyre. The notion of chaos comes along only later, together with uncomfortable concepts like infinity.

Then there is authoritarianism, ”the truth you are told.” Divinities can tell us what is wanted, if only we can discover how to hear them. The ancient Greeks believed that Apollo would speak through the mouth of an old peasant woman in a room filled with the smoke of bay leaves; traditionalist Azande in the Nilotic Sudan depend on the response of poisoned chickens. People consult sacred books, or watch for apparitions. Others look inside themselves, for truths that were imprinted in their minds before they were born or buried in their subconscious minds.

Reasoning is the third way Fernández-Armesto cites. Since knowledge attained by divination or introspection is subject to misinterpretation, eventually people return to the use of reason, which helped thinkers like Chuang Tzu and Heraclitus describe the universe. Logical analysis was used in China and Egypt long before it was discovered in Greece and in India. If the Greeks are mistakenly credited with the invention of rational thinking, it is because of the effective ways they wrote about it. Plato illustrated his dialogues with memorable myths and brilliant metaphors. Truth, as he saw it, could be discovered only by abstract reasoning, without reliance on sense perception or observation of outside phenomena. Rather, he sought to excavate it from the recesses of the mind. The word for truth in Greek, aletheia, means ”what is not forgotten.”

Plato’s pupil Aristotle developed the techniques of logical analysis that still enable us to get at the knowledge hidden within us. He examined propositions by stating possible contradictions and developed the syllogism, a method of proof based on stated premises. His methods of reasoning have influenced independent thinkers ever since. Logicians developed a system of notation, free from the associations of language, that comes close to being a kind of mathematics. The uses of pure reason have had a particular appeal to lovers of force, and have flourished in times of absolutism like the 17th and 18th centuries.

Finally, there is sense perception. Unlike his teacher, Plato, and many of Plato’s followers, Aristotle realized that pure logic had its limits. He began with study of the natural world and used evidence gained from experience or experimentation to support his arguments. Ever since, as Fernández-Armesto puts it, science and sense have kept time together, like voices in a duet that sing different tunes. The combination of theoretical and practical gave Western thinkers an edge over purer reasoning schemes in India and China.

The scientific revolution began when European thinkers broke free from religious authoritarianism and stopped regarding this earth as the center of the universe. They used mathematics along with experimentation and reasoning and developed mechanical tools like the telescope. Fernández-Armesto’s favorite example of their empirical spirit is the grueling Arctic expedition in 1736 in which the French scientist Pierre Moreau de Maupertuis determined (rightly) that the earth was not round like a ball but rather an oblate spheroid…


One of Fernández-Armesto’s most basic points is that our capacity to apprehend “the truth”– to “know”– has developed throughout history. And history’s not over. So, your correspondent wondered, mightn’t there emerge a fifth source of truth, one rooted in the assessment of vast, ever-more-complete data maps of reality– a fifth way of knowing?

Well, those days may be upon us…

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a ‘serving algorithm,’ then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
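Qin’s actual program isn’t reproduced in the article, but the “data to data, no law in the middle” idea can be sketched in miniature. In the toy version below (my construction, not Qin’s: synthetic circular orbits stand in for his planetary observations, and plain least squares stands in for his learning algorithm), a step-forward map is fitted purely from past positions and then rolled forward to predict an orbit it never saw.

```python
import numpy as np

def orbit(radius, steps, dtheta):
    """Synthetic observations: positions on a uniform circular orbit."""
    t = np.arange(steps) * dtheta
    return np.column_stack([radius * np.cos(t), radius * np.sin(t)])

dtheta = 0.1
train = orbit(1.0, 200, dtheta)           # the "past observations"

# Features: the two most recent positions; target: the next position.
X = np.hstack([train[:-2], train[1:-1]])  # shape (198, 4)
Y = train[2:]                             # shape (198, 2)

# Go "directly from data to data": fit a linear step map by least
# squares, with no equations of motion anywhere in the middle.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned map forward on an orbit it never saw (radius 2),
# feeding each prediction back in as the next observation.
test = orbit(2.0, 50, dtheta)
prev2, prev1 = test[0], test[1]
preds = []
for _ in range(len(test) - 2):
    nxt = np.concatenate([prev2, prev1]) @ W
    preds.append(nxt)
    prev2, prev1 = prev1, nxt

max_err = np.max(np.abs(np.array(preds) - test[2:]))
```

The toy works because uniform circular motion obeys an exact linear recurrence (each position is a fixed combination of the two before it, independent of radius), so a least-squares map recovers the dynamics from data alone; Qin’s planetary problem is of course far richer, which is why he needed machine learning rather than a linear fit.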

The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless ‘translate’ a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

“I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”

Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” [Qin’s collaborator Eric] Palmerduca said…

But then, as Edwin Hubble observed, “observations always involve theory,” theory that’s implicit in the particulars and the structure of the data being collected and fed to the AI. So, perhaps this is less a new way of knowing, than a new way of enhancing Fernández-Armesto’s third way– reason– as it became the scientific method…

The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”

In either case: “New machine learning theory raises questions about nature of science.”

* Francis Bello


As we experiment with epistemology, we might send carefully-observed and calculated birthday greetings to Georg Joachim de Porris (better known by his professional name, Rheticus); he was born on this date in 1514. A mathematician, astronomer, cartographer, navigational-instrument maker, medical practitioner, and teacher, he was well-known in his day for his stature in all of those fields. But he is surely best-remembered as the sole pupil of Copernicus, whose work he championed– most impactfully, facilitating the publication of his master’s De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres)… and informing the most famous work by yesterday’s birthday boy, Galileo.

