(Roughly) Daily

Posts Tagged ‘AI’

“All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason.”*…

Descartes, the original (modern) Rationalist, and Immanuel Kant, who did his best to synthesize Descartes’ thought with empiricism (à la Hume)

As Robert Cottrell explains, a growing group of online thinkers couldn’t agree more…

Much of the best new writing online originates from activities in the real world — music, fine art, politics, law…

But there is also writing which belongs primarily to the world of the Internet, by virtue of its subject-matter and of its sensibility. In this category I would place the genre that calls itself Rationalism, the raw materials of which are cognitive science and mathematical logic.

I will capitalise Rationalism and Rationalists when referring to the writers and thinkers who are connected in one way or another with the Less Wrong forum (discussed below). I will do this to avoid confusion with the much broader mass of small-r “rational” thinkers — most of us, in fact — who believe their thinking to be founded on reasoning of some sort; and with “rationalistic” thinkers, a term used in the social sciences for people who favour the generalised application of scientific methods.

Capital-R Rationalism contends that there are specific techniques, drawn mainly from probability theory, by means of which people can teach themselves to think better and to act better — where “better” is intended not as a moral judgement but as a measure of efficiency. Capital-R Rationalism contends that, by recognising and eliminating biases common in human judgement, one can arrive at a more accurate view of the world and a more accurate view of one’s actions within it. When thus equipped with a more exact view of the world and of ourselves, we are far more likely to know what we want and to know how to get it.
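[A concrete taste of those techniques: the workhorse is Bayes’ rule, which says how a prior degree of belief should be revised in the light of evidence, rather than by gut feel. A minimal sketch in Python, with the hypothesis and the numbers invented purely for illustration:]

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers only; the point is the mechanics of updating.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability of a hypothesis after seeing evidence."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Hypothesis: "this startup will survive five years" (a made-up example).
prior = 0.10                                       # assumed base rate for startups
posterior = bayes_update(prior,
                         p_evidence_if_true=0.80,  # survivors usually show early traction
                         p_evidence_if_false=0.20) # non-survivors sometimes do too
print(f"belief after the evidence: {posterior:.2f}")   # ~0.31, not 0.80
```

[The instinct to jump straight to 0.80 is precisely the base-rate neglect that Rationalist training sets out to correct.]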

Rationalism does not try to substitute for morality. It stops short of morality. It does not tell you how to feel about the truth once you think you have found it. By stopping short of morality it has the best of both worlds: It provides a rich framework for thought and action from which, in principle, one might advance, better equipped, into metaphysics. But the richness and complexity of deciding how to act Rationally in the world is such that nobody, having seriously committed to Rationalism, is ever likely to emerge on the far side of it.

The influence of Rationalism today is, I would say, comparable with that of existentialism in the mid-20th century. It offers a way of thinking and a guide to action with particular attractions for the intelligent, the dissident, the secular and the alienated. In Rationalism it is perfectly reasonable to contend that you are right while the World is wrong.

Rationalism is more of an applied than a pure discipline, so its effects are felt mainly in fields where its adepts tend to be concentrated. By far the highest concentration of Rationalists would appear to cohabit in the study and development of artificial intelligence; so it is hardly surprising that the main fruit of Rationalism to date has been the birth of a new academic field, existential risk studies, born of a convergence between Rationalism and AI, with science fiction playing a catalytic role. Leading figures in existential risk studies include Nicholas Bostrom at Oxford University and Jaan Tallinn at Cambridge University.

Another relatively new field, effective altruism, has emerged from a convergence of Rationalism and Utilitarianism, with the philosopher Peter Singer as catalyst. The leading figures in effective altruism, besides Singer, are Toby Ord, author of The Precipice; William MacAskill, author of Doing Good Better; and Holden Karnofsky, co-founder of GiveWell and blogger at Cold Takes.

A third new field, progress studies, has emerged very recently from the convergence of Rationalism and economics, with Tyler Cowen and Patrick Collison as its founding fathers. Progress studies seeks to identify, primarily from the study of history, the preconditions and factors which underpin economic growth and technological innovation, and to apply these insights in concrete ways to the promotion of future prosperity. The key text of progress studies is Cowen’s Stubborn Attachments.

I doubt there is any wholly original scientific content to Rationalism: It is a taker of facts from other fields, not a contributor to them. But by selecting and prioritising ideas which play well together, by dramatising them in the form of thought experiments, and by pursuing their applications to the limits of possibility (which far exceed the limits of common sense), Rationalism has become a contributor to the philosophical fields of logic and metaphysics and to conceptual aspects of artificial intelligence.

Tyler Cowen is beloved of Rationalists but would hesitate (I think) to identify with them. His attitude towards cognitive biases is more like that of Chesterton towards fences: Before seeking to remove them you should be sure that you understand why they were put there in the first place…

From hands-down the best guide I’ve found to the increasingly impactful ideas at work in Rationalism and its related fields, and to the thinkers behind them: “Do the Right Thing,” from @robertcottrell in @TheBrowser. Eminently worth reading in full.

[Image above: source]

* Immanuel Kant, Critique of Pure Reason

###

As we ponder precepts, we might recall that it was on this date in 1937 that Hormel went public with its own exercise in recombination when it introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.

source


“O brave new world, that has such people in ‘t!”*…

The estimable Steven Johnson suggests that the creation of Disney’s masterpiece, Snow White, gives us a preview of what may be coming with AI algorithms sophisticated enough to pass for sentient beings…

… You can make the argument that the single most dramatic acceleration point in the history of illusion occurred between the years of 1928 and 1937, the years between the release of Steamboat Willie [here], Disney’s breakthrough sound cartoon introducing Mickey Mouse, and the completion of his masterpiece, Snow White, the first long-form animated film in history [here— actually the first full-length animated feature produced in the U.S.; the first produced anywhere in color]. It is hard to think of another stretch where the formal possibilities of an artistic medium expanded in such a dramatic fashion, in such a short amount of time.

[There follows a fascinating history of the Disney Studios’ technical innovations that made Snow White possible, and an account of the film’s remarkable premiere…]

In just nine years, Disney and his team had transformed a quaint illusion—the dancing mouse is whistling!—into an expressive form so vivid and realistic that it could bring people to tears. Disney and his team had created the ultimate illusion: fictional characters created by hand, etched onto celluloid, and projected at twenty-four frames per second, that were somehow so believably human that it was almost impossible not to feel empathy for them.

Those weeping spectators at the Snow White premiere signaled a fundamental change in the relationship between human beings and the illusions concocted to amuse them. Complexity theorists have a term for this kind of change in physical systems: phase transitions. Alter one property of a system—lowering the temperature of a cloud of steam, for instance—and for a while the changes are linear: the steam gets steadily cooler. But then, at a certain threshold point, a fundamental shift happens: below 212 degrees Fahrenheit, the gas becomes liquid water. That moment marks the phase transition: not just cooler steam, but something altogether different.

It is possible—maybe even likely—that a further twist awaits us. When Charles Babbage encountered an automaton of a ballerina as a child in the early 1800s, the “irresistible eyes” of the mechanism convinced him that there was something lifelike in the machine.  Those robotic facial expressions would seem laughable to a modern viewer, but animatronics has made a great deal of progress since then. There may well be a comparable threshold in simulated emotion—via robotics or digital animation, or even the text chat of an AI like LaMDA—that makes it near impossible for humans not to form emotional bonds with a simulated being. We knew the dwarfs in Snow White were not real, but we couldn’t keep ourselves from weeping for their lost princess in sympathy with them. Imagine a world populated by machines or digital simulations that fill our lives with comparable illusion, only this time the virtual beings are not following a storyboard sketched out in Disney’s studios, but instead responding to the twists and turns and unmet emotional needs of our own lives. (The brilliant Spike Jonze film Her imagined this scenario using only a voice.) There is likely to be the equivalent of a Turing Test for artificial emotional intelligence: a machine real enough to elicit an emotional attachment. It may well be that the first simulated intelligence to trigger that connection will be some kind of voice-only assistant, a descendant of software like Alexa or Siri—only these assistants will have such fluid conversational skills and growing knowledge of our own individual needs and habits that we will find ourselves compelled to think of them as more than machines, just as we were compelled to think of those first movie stars as more than just flickering lights on a fabric screen. Once we pass that threshold, a bizarre new world may open up, a world where our lives are accompanied by simulated friends…

Are we in for a phase shift in our understanding of companionship? “Natural Magic,” from @stevenbjohnson, adapted from his book Wonderland: How Play Made the Modern World.

And for a different but apposite perspective, from the ever-illuminating L. M. Sacasas (@LMSacasas), see “LaMDA, Lemoine, and the Allures of Digital Re-enchantment.”

* Shakespeare, The Tempest

###

As we rethink relationships, we might recall that it was on this date in 2007 that the original iPhone went on sale. Generally downplayed by traditional technology pundits after its announcement six months earlier, the iPhone was greeted by long lines of buyers around the country on that first day. Quickly becoming a phenomenon, one million iPhones were sold in only 74 days. Since those early days, the ensuing iPhone models have continued to set sales records and have radically changed not only the smartphone and technology industries, but the world in which they operate as well.

The original iPhone

source

“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…
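The excerpt doesn’t describe the circuit’s internals, but the task it teaches itself is a familiar one. Here is a rough software analogue (a perceptron-style classifier in Python; the data, labels, and update rule are my own stand-ins, not anything from the paper): separate two flower species by petal measurements, nudging a set of weights after each mistake. The physical circuit performs the analogous adjustment with voltages rather than arithmetic, and without a computer in the loop.

```python
# A software analogue of the learning task described above: separate two flower
# species by petal size using a perceptron-style update rule.
# The data and parameters are invented for illustration.

import random

# (petal_length_cm, petal_width_cm) -> label: 0 = small-petaled, 1 = large-petaled
samples = [
    ((1.4, 0.2), 0), ((1.5, 0.3), 0), ((1.3, 0.2), 0),
    ((4.7, 1.4), 1), ((4.9, 1.5), 1), ((5.1, 1.8), 1),
]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # a few passes over the data suffice here
    random.shuffle(samples)
    for (x1, x2), label in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # -1, 0, or +1
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print(w, b)                                           # weights separating the two clusters
print(1 if w[0] * 1.4 + w[1] * 0.2 + b > 0 else 0)    # expect 0: small-petaled
```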

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman

###

As we brace ourselves (and lest we doubt the big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.”*…

Claude Shannon with his creation, Theseus the maze-solving mouse, an early illustration of machine learning and a follow-on project to the work described below

Readers will know of your correspondent’s fascination with the remarkable Claude Shannon (see here and here), remembered as “the father of information theory,” but seminally involved in so much more. In a recent piece in IEEE Spectrum, the redoubtable Rodney Brooks argues that we should add another credit to Shannon’s list…

Among the great engineers of the 20th century, who contributed the most to our 21st-century technologies? I say: Claude Shannon.

Shannon is best known for establishing the field of information theory. In a 1948 paper, one of the greatest in the history of engineering, he came up with a way of measuring the information content of a signal and calculating the maximum rate at which information could be reliably transmitted over any sort of communication channel. The article, titled “A Mathematical Theory of Communication,” describes the basis for all modern communications, including the wireless Internet on your smartphone and even an analog voice signal on a twisted-pair telephone landline. In 1966, the IEEE gave him its highest award, the Medal of Honor, for that work.
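[To see what “measuring the information content of a signal” amounts to in practice, a quick worked sketch of my own (not Brooks’s): the entropy of a source in bits per symbol, and the Shannon–Hartley capacity of a noisy channel in bits per second.]

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum p * log2(p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A four-symbol source: a skewed distribution carries less information per symbol.
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits
print(entropy_bits([0.70, 0.15, 0.10, 0.05]))   # ~1.32 bits

def shannon_hartley_capacity(bandwidth_hz, snr_linear):
    """Maximum reliable rate C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-grade channel with a signal-to-noise ratio of 1,000 (30 dB).
print(shannon_hartley_capacity(3000, 1000))     # ~29,900 bits per second
```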

If information theory had been Shannon’s only accomplishment, it would have been enough to secure his place in the pantheon. But he did a lot more…

In 1950 Shannon published an article in Scientific American and also a research paper describing how to program a computer to play chess. He went into detail on how to design a program for an actual computer…

Shannon did all this at a time when there were fewer than 10 computers in the world. And they were all being used for numerical calculations. He began his research paper by speculating on all sorts of things that computers might be programmed to do beyond numerical calculations, including designing relay and switching circuits, designing electronic filters for communications, translating between human languages, and making logical deductions. Computers do all these things today…
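The 1950 chess paper mentioned above laid out the recipe chess programs would follow for decades: search the game tree with minimax, scoring the positions at the search horizon with an evaluation function (material balance, mobility, and so on). A toy sketch of the idea follows; the miniature “game” and its evaluation are invented here for illustration, and Shannon, of course, wrote no Python.

```python
# Minimax over a game tree, in the spirit of Shannon's 1950 chess paper:
# look a few moves ahead, score the resulting positions with an evaluation
# function, and assume each side chooses the move best for itself.
# The Position class below is a toy stand-in, not real chess.

def minimax(position, depth, maximizing):
    if depth == 0 or position.is_terminal():
        return position.evaluate()        # e.g. material balance in Shannon's scheme
    if maximizing:
        return max(minimax(child, depth - 1, False) for child in position.children())
    return min(minimax(child, depth - 1, True) for child in position.children())

class Position:
    """Toy game: the players alternately add +1 or -1; the score is the running sum."""
    def __init__(self, score=0, plies_left=4):
        self.score, self.plies_left = score, plies_left
    def is_terminal(self):
        return self.plies_left == 0
    def evaluate(self):
        return self.score
    def children(self):
        return [Position(self.score + d, self.plies_left - 1) for d in (+1, -1)]

print(minimax(Position(), depth=4, maximizing=True))   # 0: best play by both sides cancels out
```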

The “father of information theory” also paved the way for AI: “How Claude Shannon Helped Kick-start Machine Learning,” from @rodneyabrooks in @IEEESpectrum.

* Claude Shannon (who may or may not have been kidding…)

###

As we ponder possibility, we might send uncertain birthday greetings to Werner Karl Heisenberg; he was born on this date in 1901.  A theoretical physicist, he made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, superconductivity, cosmic rays, and subatomic particles.  But he is most widely remembered as a pioneer of quantum mechanics and author of what’s become known as the Heisenberg Uncertainty Principle.  Heisenberg was awarded the Nobel Prize in Physics for 1932 “for the creation of quantum mechanics.”

During World War II, Heisenberg was part of the team attempting to create an atomic bomb for Germany– for which he was arrested and detained by the Allies at the end of the conflict.  He was returned to Germany, where he became director of the Kaiser Wilhelm Institute for Physics, which soon thereafter was renamed the Max Planck Institute for Physics. He later served as president of the German Research Council, chairman of the Commission for Atomic Physics, chairman of the Nuclear Physics Working Group, and president of the Alexander von Humboldt Foundation.

Some things are so serious that one can only joke about them

Werner Heisenberg

source

“In the attempt to make scientific discoveries, every problem is an opportunity and the more difficult the problem, the greater will be the importance of its solution”*…

(Roughly) Daily is headed into its traditional Holiday hibernation; regular service will begin again very early in the New Year.

It seems appropriate (especially given the travails of this past year) to end the year on a positive and optimistic note, with a post celebrating an extraordinary accomplishment– Science magazine’s (thus, the AAAS’) “Breakthrough of the Year” for 2021…

In his 1972 Nobel Prize acceptance speech, American biochemist Christian Anfinsen laid out a vision: One day it would be possible, he said, to predict the 3D structure of any protein merely from its sequence of amino acid building blocks. With hundreds of thousands of proteins in the human body alone, such an advance would have vast applications, offering insights into basic biology and revealing promising new drug targets. Now, after nearly 50 years, researchers have shown that artificial intelligence (AI)-driven software can churn out accurate protein structures by the thousands—an advance that realizes Anfinsen’s dream and is Science’s 2021 Breakthrough of the Year.

AI-powered predictions show proteins finding their shapes; the full story: “Protein structures for all.”

And read Nature’s profile of the scientist behind the breakthrough: “John Jumper: Protein predictor.”

* E. O. Wilson

###

As we celebrate science, we might send well-connected birthday greetings to Robert Elliot Kahn; he was born on this date in 1938. An electrical engineer and computer scientist, he and his co-creator, Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet. Earlier, he and Cerf, along with fellow computer scientists Lawrence Roberts, Paul Baran, and Leonard Kleinrock, had built the ARPANET, the first network to successfully link computers around the country.

Kahn has won the Turing Award, the National Medal of Technology, and the Presidential Medal of Freedom, among many, many other awards and honors.

source
