(Roughly) Daily

Posts Tagged ‘machine learning’

“To sleep: perchance to dream: ay, there’s the rub”*…

I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, how its grounding in mechanical systems inflicted such cruelty on workers whom Taylor demanded ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they had some explanatory power, they also had glaring deficits.

Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it’s unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses about how brains work!

Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model latches onto quirks and spurious correlations in its particular training data, and so generalizes badly to new data.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”

That’s overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.

To combat overfitting, ML researchers sometimes inject noise into the training data, as an effort to break up these spurious correlations.
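(A toy illustration of that remedy – invented for this post, not drawn from Hoel’s or Doctorow’s work: below, a deliberately over-flexible polynomial is fit to a small noisy sample, where it memorizes every quirk, and then refit on a training set padded with jittered copies of the same points, which blunts the spurious structure. All of the numbers are made up.)

```python
# A minimal sketch (invented example) of noise injection as a remedy for
# overfitting: a high-capacity model memorizes a small noisy sample, but
# training on jittered copies of the data breaks up its spurious patterns.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(42)

# A small, noisy training sample drawn from a simple underlying trend.
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + 0.15 * rng.normal(size=x_train.size)

# Held-out data from the same trend, to measure generalization.
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

def fit_and_score(x, y, degree=11):
    """Fit a deliberately over-flexible polynomial; return RMSE on held-out data."""
    coefs = P.polyfit(x, y, degree)
    preds = P.polyval(x_test, coefs)
    return np.sqrt(np.mean((preds - y_test) ** 2))

# 1) Overfit: with 12 coefficients for 12 points, the fit threads every noisy value.
print("test RMSE, raw data:      ", round(fit_and_score(x_train, y_train), 3))

# 2) Noise injection: train on many jittered copies of the same points.
x_aug = np.tile(x_train, 30) + 0.02 * rng.normal(size=12 * 30)
y_aug = np.tile(y_train, 30) + 0.10 * rng.normal(size=12 * 30)
print("test RMSE, noise-injected:", round(fit_and_score(x_aug, y_aug), 3))
```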

And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by overfitting.

Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…

Sleeping, dreaming, and the importance of a nightly dose of irrationality– Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.

(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)

* Shakespeare, Hamlet

###

As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.

source

“Facts alone, no matter how numerous or verifiable, do not automatically arrange themselves into an intelligible, or truthful, picture of the world. It is the task of the human mind to invent a theoretical framework to account for them.”*…

PPPL physicist Hong Qin in front of images of planetary orbits and computer code

… or maybe not. A couple of decades ago, your correspondent came across a short book that aimed to explain how we think we know what we think we know, Truth: A History and a Guide for the Perplexed, by Felipe Fernández-Armesto (then, a professor of history at Oxford; now, at Notre Dame)…

According to Fernández-Armesto, people throughout history have sought to get at the truth in one or more of four basic ways. The first is through feeling. Truth is a tangible entity. The third-century B.C. Chinese sage Chuang Tzu stated, ”The universe is one.” Others described the universe as a unity of opposites. To the fifth-century B.C. Greek philosopher Heraclitus, the cosmos is a tension like that of the bow or the lyre. The notion of chaos comes along only later, together with uncomfortable concepts like infinity.

Then there is authoritarianism, ”the truth you are told.” Divinities can tell us what is wanted, if only we can discover how to hear them. The ancient Greeks believed that Apollo would speak through the mouth of an old peasant woman in a room filled with the smoke of bay leaves; traditionalist Azande in the Nilotic Sudan depend on the response of poisoned chickens. People consult sacred books, or watch for apparitions. Others look inside themselves, for truths that were imprinted in their minds before they were born or buried in their subconscious minds.

Reasoning is the third way Fernández-Armesto cites. Since knowledge attained by divination or introspection is subject to misinterpretation, eventually people return to the use of reason, which helped thinkers like Chuang Tzu and Heraclitus describe the universe. Logical analysis was used in China and Egypt long before it was discovered in Greece and in India. If the Greeks are mistakenly credited with the invention of rational thinking, it is because of the effective ways they wrote about it. Plato illustrated his dialogues with memorable myths and brilliant metaphors. Truth, as he saw it, could be discovered only by abstract reasoning, without reliance on sense perception or observation of outside phenomena. Rather, he sought to excavate it from the recesses of the mind. The word for truth in Greek, aletheia, means ”what is not forgotten.”

Plato’s pupil Aristotle developed the techniques of logical analysis that still enable us to get at the knowledge hidden within us. He examined propositions by stating possible contradictions and developed the syllogism, a method of proof based on stated premises. His methods of reasoning have influenced independent thinkers ever since. Logicians developed a system of notation, free from the associations of language, that comes close to being a kind of mathematics. The uses of pure reason have had a particular appeal to lovers of force, and have flourished in times of absolutism like the 17th and 18th centuries.

Finally, there is sense perception. Unlike his teacher, Plato, and many of Plato’s followers, Aristotle realized that pure logic had its limits. He began with study of the natural world and used evidence gained from experience or experimentation to support his arguments. Ever since, as Fernández-Armesto puts it, science and sense have kept time together, like voices in a duet that sing different tunes. The combination of theoretical and practical gave Western thinkers an edge over purer reasoning schemes in India and China.

The scientific revolution began when European thinkers broke free from religious authoritarianism and stopped regarding this earth as the center of the universe. They used mathematics along with experimentation and reasoning and developed mechanical tools like the telescope. Fernández-Armesto’s favorite example of their empirical spirit is the grueling Arctic expedition in 1736 in which the French scientist Pierre Moreau de Maupertuis determined (rightly) that the earth was not round like a ball but rather an oblate spheroid…

source

One of Fernández-Armesto’s most basic points is that our capacity to apprehend “the truth”– to “know”– has developed throughout history. And history’s not over. So, your correspondent wondered, mightn’t there emerge a fifth source of truth, one rooted in the assessment of vast, ever-more-complete data maps of reality– a fifth way of knowing?

Well, those days may be upon us…

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a ‘serving algorithm,’ then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
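(To make that “data to data” idea concrete, here is a hypothetical sketch – a generic stand-in, not Qin’s actual algorithm, his “serving algorithm,” or the PPPL code: fit a plain neural network that maps each observed state of an orbit to the next one, then roll it forward to produce predictions with no law of gravitation anywhere in the model. The synthetic circular “orbit” and the network settings are assumptions made for the example.)

```python
# A hypothetical "black box" orbit predictor (not the PPPL algorithm):
# learn the map from one observed state to the next directly from data,
# then iterate that map to predict future positions -- no physics inside.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for "past observations": a synthetic near-circular orbit.
t = np.linspace(0, 20 * np.pi, 4000)
observations = np.column_stack([np.cos(t), np.sin(t)])   # (x, y) positions

# Training pairs: state at step k -> state at step k + 1.
X_train, y_train = observations[:-1], observations[1:]

# A small neural network plays the role of the black box.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     tol=1e-7, random_state=0)
model.fit(X_train, y_train)

# Roll the learned map forward from the last observed state.
state = observations[-1]
predicted = []
for _ in range(500):
    state = model.predict(state.reshape(1, -1))[0]
    predicted.append(state)

print("first predicted states:", np.round(np.array(predicted[:3]), 3))
```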

The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless ‘translate’ a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

“I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”

Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” [Qin’s collaborator Eric] Palmerduca said…

But then, as Edwin Hubble observed, “observations always involve theory,” theory that’s implicit in the particulars and the structure of the data being collected and fed to the AI. So, perhaps this is less a new way of knowing, than a new way of enhancing Fernández-Armesto’s third way– reason– as it became the scientific method…

The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”
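(Again purely by way of illustration – this is not the PPPL team’s code: the sketch below “fills in a gap” in a simple synthetic record by fitting a model to the samples that do exist and letting it estimate the missing stretch, uncertainty included. The data, kernel, and noise level are invented for the example.)

```python
# A hypothetical sketch of using a learned model to fill gaps in a data set:
# fit a Gaussian-process regressor to the observed samples, then estimate
# the missing stretch (with an uncertainty band). Not the PPPL team's code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
t_all = np.linspace(0, 10, 200)
signal = np.sin(t_all)                         # the "true" process

# Pretend a chunk of the record is missing.
observed = (t_all < 4) | (t_all > 6)
t_obs = t_all[observed]
y_obs = signal[observed] + 0.05 * rng.normal(size=t_obs.size)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.05 ** 2)
gp.fit(t_obs.reshape(-1, 1), y_obs)

# Estimate the missing points and how uncertain those estimates are.
t_gap = t_all[~observed].reshape(-1, 1)
y_fill, y_std = gp.predict(t_gap, return_std=True)
print("filled", len(y_fill), "missing points; max std dev:", np.round(y_std.max(), 3))
```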

In either case: “New machine learning theory raises questions about nature of science.”

Francis Bello

###

As we experiment with epistemology, we might send carefully-observed and calculated birthday greetings to Georg Joachim de Porris (better known by his professional name, Rheticus); he was born on this date in 1514. A mathematician, astronomer, cartographer, navigational-instrument maker, medical practitioner, and teacher, he was well-known in his day for his stature in all of those fields. But he is surely best-remembered as the sole pupil of Copernicus, whose work he championed– most impactfully, facilitating the publication of his master’s De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres)… and informing the most famous work by yesterday’s birthday boy, Galileo.

source

“Our goal at DOOM! will be to consider a plurality of futures and then doing everything that we can to prevent nuclear war, oblivion and ruin”*…

Readers may recall a recent post featuring an essay written by GPT-3, a machine-learning language model: “Are Humans Intelligent?- a Salty AI Op-Ed.” Our friends at Nemesis (@nemesis_global; see here) have upped the ante…

The end of trends has been heralded by various outlets for years (see here, here and many more on our Are.na channel).

But COVID time is crazy. We had a hunch that the hype cycle itself was finally in its true death throes – related to economic collapse, popular uprising, a general sense of consumer fatigue, and the breakdown of a consensus reality in which such trends could incubate. Since trends are a temporal phenomenon (they have to start, peak, fade away, typify a time, bottle the zeitgeist, etc.) we began with a simple survey about the breakdown of narrative time, first circulated through our personal social media accounts…

Then we ran the same questions through an online survey distributed to 150 randomly chosen respondents, deployed in collaboration with General Research Laboratories. These responses, which will likely appear in a future memo, ranged from deeply personal to millenarian to an extreme form of ‘new optimism’.

Then our process took a crazier turn. In July 2020, OpenAI released GPT-3 for beta testing – a natural language processing system (colloquially, an “AI”) that uses deep learning to produce human-like text. K Allado-McDowell, writer, co-founder of the Artists + Machine Intelligence program at Google AI and friend of Nemesis, had started doing experimental collaborative writing with GPT-3. By exploring its quirks, K was already building an empirical understanding of GPT-3’s ability to articulate the nature of consciousness, memory, language, and cosmology… We were drawn to the oracular quality of the text generated by GPT-3, and became curious about how it could interact with the material we had gathered.

With the generous help of K – who had quickly become a skilled GPT-3 whisperer – we began feeding it our survey results, in the form of essayistic synopses that summarized the key points of the respondents and quoted choice answers. We left open-ended, future-facing sentence fragments at the end of these and let GPT-3 fill in the rest, like a demented version of Gmail’s suggestive text feature….
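(For readers curious what that workflow looks like in practice, here is a hypothetical sketch using the openai Python package’s completion interface from the 2020 beta period – the engine name, prompt text, and sampling parameters are illustrative assumptions, and this is emphatically not the Nemesis/DOOM! pipeline itself.)

```python
# A hypothetical sketch of open-ended completion prompting (illustrative only;
# not the Nemesis/DOOM! pipeline), using the beta-era OpenAI completions API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# An essayistic synopsis that ends in an open-ended, future-facing fragment,
# which the model is left to complete.
prompt = (
    "Survey respondents described a breakdown of narrative time: trends no "
    "longer start, peak, and fade, and consumer fatigue is widespread. "
    "Looking ahead, the next decade will be defined by"
)

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 engine available during the beta
    prompt=prompt,
    max_tokens=150,
    temperature=0.8,       # higher temperature for a more 'oracular' voice
)

print(prompt + response["choices"][0]["text"])
```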

As we worked, GPT-3 quickly recognized the genre of our undertaking: a report concerned with the future written by some kind of consultancy, expert group, or think tank. So it inadvertently rebranded us, naming this consultancy DOOM!

What follows is a text collaboratively composed by Nemesis, GPT-3, K Allado-McDowell and our survey respondents, but arguably authored by none of us, per se. Instead you could say this report was written by the “third mind” of DOOM! which spontaneously arose when we began to process this information together with the conscious goal of generating predictions about the future. The outputs of our GPT-3 experiments have been trimmed, edited for grammar, minorly tweaked and ordered into numbered chapters….

An AI-written “report of the future,” eminently worthy of a close reading at (at least) two levels: “The DOOM! Report.”

* GPT-3’s renaming of and mission statement for its “client”

###

As we contemplate centaurs, we might send freaky (if not altogether panicked) birthday greetings to John W. “Jack” Ryan; he was born on this date in 1926.  A Yale-trained engineer, Ryan left Raytheon (where he worked on the Navy’s Sparrow III and Hawk guided missiles) to join Mattel.  He oversaw the conversion of the Mattel-licensed “Bild Lilli” doll into Barbie (contributing, among other things, the joints that allowed “her” to bend at the waist and the knee) and created the Hot Wheels line.  But he is perhaps best remembered as the inventor of the pull-string, talking voice box that gave Chatty Cathy her voice.

Ryan with his wife, Zsa Zsa Gabor. She was his first only spouse; he, her sixth.

 source

“I am so clever that sometimes I don’t understand a single word of what I am saying”*…

Humans claim to be intelligent, but what exactly is intelligence? Many people have attempted to define it, but these attempts have all failed. So I propose a new definition: intelligence is whatever humans do.

I will attempt to prove this new definition is superior to all previous attempts to define intelligence. First, consider humans’ history. It is a story of repeated failures. First humans thought the Earth was flat. Then they thought the Sun went around the Earth. Then they thought the Earth was the center of the universe. Then they thought the universe was static and unchanging. Then they thought the universe was infinite and expanding. Humans were wrong about alchemy, phrenology, bloodletting, creationism, astrology, numerology, and homeopathy. They were also wrong about the best way to harvest crops, the best way to govern, the best way to punish criminals, and the best way to cure the sick.

I will not go into the many ways humans have been wrong about morality. The list is long and depressing. If humans are so smart, how come they keep being wrong about everything?

So, what does it mean to be intelligent?…

Arram Sabeti (@arram) gave a prompt to GPT-3, a machine-learning language model; it wrote: “Are Humans Intelligent?- a Salty AI Op-Ed.”

(image above: source)

* Oscar Wilde

###

As we hail our new robot overlords, we might recall that it was on this date in 1814 that London suffered “The Great Beer Flood Disaster” when the metal bands on an immense vat at Meux’s Horse Shoe Brewery snapped, releasing a tidal wave of 3,555 barrels of Porter (571 tons– more than 1 million pints), which swept away the brewery walls, flooded nearby basements, and collapsed several adjacent tenements. While there were reports of over twenty fatalities resulting from poisoning by the porter fumes or alcohol coma, it appears that the death toll was 8, all of them killed by the destruction the huge wave of beer caused in the structures surrounding the brewery.

(The U.S. had its own vat mishap in 1919, when a Boston molasses plant suffered similarly-burst bands, creating a heavy wave of molasses moving at a speed of an estimated 35 mph; it killed 21 and injured 150.)

Meux’s Horse Shoe Brewery

source

“We must be free not because we claim freedom, but because we practice it”*…

 


There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher‘s (@JohnDanaher) chapter in the forthcoming Oxford Handbook on the Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Fully inflating the sphere at ground level would have required forty thousand pounds (18,144 kg) of air, so it was inflated only once in orbit, where just a few pounds of gas sufficed to keep it inflated.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by LW

February 24, 2020 at 1:01 am
