(Roughly) Daily


“All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason.”*…

Descartes, the original (modern) Rationalist, and Immanuel Kant, who did his best to synthesize Descartes’ thought with empiricism (à la Hume)

As Robert Cottrell explains, a growing group of online thinkers couldn’t agree more…

Much of the best new writing online originates from activities in the real world — music, fine art, politics, law…

But there is also writing which belongs primarily to the world of the Internet, by virtue of its subject-matter and of its sensibility. In this category I would place the genre that calls itself Rationalism, the raw materials of which are cognitive science and mathematical logic.

I will capitalise Rationalism and Rationalists when referring to the writers and thinkers who are connected in one way or another with the Less Wrong forum (discussed below). I will do this to avoid confusion with the much broader mass of small-r “rational” thinkers — most of us, in fact — who believe their thinking to be founded on reasoning of some sort; and with “rationalistic” thinkers, a term used in the social sciences for people who favour the generalised application of scientific methods.

Capital-R Rationalism contends that there are specific techniques, drawn mainly from probability theory, by means of which people can teach themselves to think better and to act better — where “better” is intended not as a moral judgement but as a measure of efficiency. Capital-R Rationalism contends that, by recognising and eliminating biases common in human judgement, one can arrive at a more accurate view of the world and a more accurate view of one’s actions within it. When thus equipped with a more exact view of the world and of ourselves, we are far more likely to know what we want and to know how to get it.

Rationalism does not try to substitute for morality. It stops short of morality. It does not tell you how to feel about the truth once you think you have found it. By stopping short of morality it has the best of both worlds: It provides a rich framework for thought and action from which, in principle, one might advance, better equipped, into metaphysics. But the richness and complexity of deciding how to act Rationally in the world is such that nobody, having seriously committed to Rationalism, is ever likely to emerge on the far side of it.

The influence of Rationalism today is, I would say, comparable with that of existentialism in the mid-20th century. It offers a way of thinking and a guide to action with particular attractions for the intelligent, the dissident, the secular and the alienated. In Rationalism it is perfectly reasonable to contend that you are right while the World is wrong.

Rationalism is more of an applied than a pure discipline, so its effects are felt mainly in fields where its adepts tend to be concentrated. By far the highest concentration of Rationalists would appear to cohabit in the study and development of artificial intelligence; so it is hardly surprising that the main fruit of Rationalism to date has been the birth of a new academic field, existential risk studies, born of a convergence between Rationalism and AI, with science fiction playing a catalytic role. Leading figures in existential risk studies include Nicholas Bostrom at Oxford University and Jaan Tallinn at Cambridge University.

Another relatively new field, effective altruism, has emerged from a convergence of Rationalism and Utilitarianism, with the philosopher Peter Singer as catalyst. The leading figures in effective altruism, besides Singer, are Toby Ord, author of The Precipice; William MacAskill, author of Doing Good Better; and Holden Karnofsky, co-founder of GiveWell and blogger at Cold Takes.

A third new field, progress studies, has emerged very recently from the convergence of Rationalism and economics, with Tyler Cowen and Patrick Collison as its founding fathers. Progress studies seeks to identify, primarily from the study of history, the preconditions and factors which underpin economic growth and technological innovation, and to apply these insights in concrete ways to the promotion of future prosperity. The key text of progress studies is Cowen’s Stubborn Attachments.

I doubt there is any wholly original scientific content to Rationalism: It is a taker of facts from other fields, not a contributor to them. But by selecting and prioritising ideas which play well together, by dramatising them in the form of thought experiments, and by pursuing their applications to the limits of possibility (which far exceed the limits of common sense), Rationalism has become a contributor to the philosophical fields of logic and metaphysics and to conceptual aspects of artificial intelligence.

Tyler Cowen is beloved of Rationalists but would hesitate (I think) to identify with them. His attitude towards cognitive biases is more like that of Chesterton towards fences: Before seeking to remove them you should be sure that you understand why they were put there in the first place…

From hands-down the best guide I’ve found to the increasingly impactful ideas at work in Rationalism and its related fields, and to the thinkers behind them: “Do the Right Thing,” from @robertcottrell in @TheBrowser. Eminently worth reading in full.
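
To make the probability-theory point concrete: the canonical Rationalist technique is Bayesian updating, revising one’s confidence in a hypothesis in proportion to the strength of the evidence. What follows is a minimal sketch in Python (mine, not Cottrell’s, and the test numbers are hypothetical) of the stock diagnostic-test example, where unaided intuition famously overshoots:

def bayes_update(prior, p_pos_given_h, p_pos_given_not_h):
    """Return P(hypothesis | positive evidence) via Bayes' theorem."""
    p_pos = p_pos_given_h * prior + p_pos_given_not_h * (1.0 - prior)
    return p_pos_given_h * prior / p_pos

# A condition with a 1% base rate; a test that catches 90% of true cases
# but also false-alarms on 9% of healthy ones.
posterior = bayes_update(prior=0.01, p_pos_given_h=0.90, p_pos_given_not_h=0.09)
print(f"P(condition | positive test) = {posterior:.1%}")  # about 9.2%

The unintuitive output (a positive result from a “90% accurate” test leaves barely a 9% chance of actually having the condition) is precisely the sort of base-rate neglect that Rationalist training sets out to correct.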

[Image above: source]

* Immanuel Kant, Critique of Pure Reason

###

As we ponder precepts, we might recall that it was on this date in 1937 that Hormel went public with its own exercise in recombination when it introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.

source

Written by (Roughly) Daily

July 5, 2022 at 1:00 am

“What’s in a name?”*…

“Copi” (née Carp)

How to rid the Midwest of an invasive aquatic species? As Sarah Kuta explains, the State of Illinois hopes that it can convince its citizens to help…

For decades, invasive species of carp have been wreaking havoc on lakes and waterways in the American Midwest. One way to help tackle the infestation is simply to catch, cook and eat the fish, but many diners turn up their noses when they hear the word carp.

Now, the Illinois Department of Natural Resources and other partners hope that giving the fish a fresh new image will make them more appealing to eat. They’ve given the invasive species a new name, “copi,” in hopes that people will order copi dishes at restaurants or even cook up the fish at home.

… carp began to spread widely when the other four carp species were imported to the United States in the 1960s and ’70s to eat algae in wastewater treatment plants and aquaculture ponds, as well as to serve as a source of food.

The fish escaped into the Mississippi River, then continued their spread into other rivers and beyond. Their population grew quickly, and they began to crowd out native fish species, outcompeting them for food (different carp species feed on everything from plants and plankton on up to endangered freshwater snails). Invasive carp are also thought to lower water quality, which ultimately harms underwater ecosystems and can kill off other native species like freshwater mussels. (The fish were once collectively called “Asian carp,” but state governments and federal agencies now refer to them as “invasive carp” because of concerns over bigotry toward Asian culture and people.)

Federal, state and local officials have since spent hundreds of millions of dollars trying to keep the invasive fish in check, and most importantly, out of the Great Lakes. If the fish swim into Lake Michigan, they could threaten the commercial fishing and tourism industries, which together are responsible for billions of dollars of economic activity…

The new name comes from the word “copious,” a nod to the sheer abundance of these fish…

From the Annals of Marketing: “Can Rebranding Invasive Carp Make It More Appealing to Eat?,” from @SarahKuta in @SmithsonianMag.

* Shakespeare, Romeo and Juliet

###

As we dig in, we might recall that on this date in 1862 (86 years after the Declaration of Independence was adopted by the Second Continental Congress on this same date), Charles Lutwidge Dodgson, a young Oxford mathematics don, took the daughters of the Dean of Christ Church College– Alice Liddell and her sisters– on a boating picnic on the River Thames in Oxford.  To amuse the children he told them the story of a little girl, bored by a riverbank, whose adventure begins when she tumbles down a rabbit hole into a topsy-turvy world called “Wonderland.”  The story so captivated the 10-year-old Alice that she begged him to write it down. The result was Alice’s Adventures in Wonderland, published in 1865 under the pen name “Lewis Carroll,” with illustrations by John Tenniel.

source

“History is who we are and why we are the way we are”*…

What a long, strange trip it’s been…

March 12, 1989 – Information Management: A Proposal

While working at CERN, Tim Berners-Lee first comes up with the idea for the World Wide Web. To pitch it, he submits to his employers a proposal for organizing scientific documents, titled “Information Management: A Proposal.” In this proposal, Berners-Lee sketches out what the web will become, including early versions of the HTTP protocol and HTML.

The first entry in a timeline that serves as a table of contents for a series of informative blog posts: “The History of the Web,” from @jay_hoffmann.

* David McCullough

###

As we jack in, we might recall that it was on this date in 1969 that the world first learned of what would become the internet, which would, in turn, become the backbone of the web, when UCLA issued a press release announcing the project:

UCLA will become the first station in a nationwide computer network which, for the first time, will link together computers of different makes and using different machine languages into one time-sharing system.

Creation of the network represents a major forward step in computer technology and may serve as the forerunner of large computer networks of the future.

The ambitious project is supported by the Defense Department’s Advanced Research Projects Agency (ARPA), which has pioneered many advances in computer research, technology and applications during the past decade. The network project was proposed and is headed by ARPA’s Dr. Lawrence G. Roberts.

The system will, in effect, pool the computer power, programs and specialized know-how of about 15 computer research centers, stretching from UCLA to M.I.T. Other California network stations (or nodes) will be located at the Rand Corp. and System Development Corp., both of Santa Monica; the Santa Barbara and Berkeley campuses of the University of California; Stanford University and the Stanford Research Institute.

The first stage of the network will go into operation this fall as a subnet joining UCLA, Stanford Research Institute, UC Santa Barbara, and the University of Utah. The entire network is expected to be operational in late 1970.

Engineering professor Leonard Kleinrock [see here], who heads the UCLA project, describes how the network might handle a sample problem:

Programmers at Computer A have a blurred photo which they want to bring into focus. Their program transmits the photo to Computer B, which specializes in computer graphics, and instructs B’s program to remove the blur and enhance the contrast. If B requires specialized computational assistance, it may call on Computer C for help.

The processed work is shuttled back and forth until B is satisfied with the photo, and then sends it back to Computer A. The messages, ranging across the country, can flash between computers in a matter of seconds, Dr. Kleinrock says.

UCLA’s part of the project will involve about 20 people, including some 15 graduate students. The group will play a key role as the official network measurement center, analyzing computer interaction and network behavior, comparing performance against anticipated results, and keeping a continuous check on the network’s effectiveness. For this job, UCLA will use a highly specialized computer, the Sigma 7, developed by Scientific Data Systems of Los Angeles.

Each computer in the network will be equipped with its own interface message processor (IMP) which will double as a sort of translator among the Babel of computer languages and as a message handler and router.

Computer networks are not an entirely new concept, notes Dr. Kleinrock. The SAGE radar defense system of the Fifties was one of the first, followed by the airlines’ SABRE reservation system. At the present time, the nation’s electronically switched telephone system is the world’s largest computer network.

However, all three are highly specialized and single-purpose systems, in contrast to the planned ARPA system which will link a wide assortment of different computers for a wide range of unclassified research functions.

“As of now, computer networks are still in their infancy,” says Dr. Kleinrock. “But as they grow up and become more sophisticated, we will probably see the spread of ‘computer utilities’, which, like present electronic and telephone utilities, will service individual homes and offices across the country.”

source
Boelter Hall, UCLA

source
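
Kleinrock’s sample problem reads, in modern terms, like a chain of remote procedure calls. Here is a toy sketch in Python (mine, decidedly not period code: the “computers” are plain functions, the “photo” a short list of pixel values, and the image operations crude stand-ins) of the shape of that workflow:

def computer_c(photo):
    # Computer C: specialized computational assistance, here a
    # crude contrast stretch across the pixel range.
    lo, hi = min(photo), max(photo)
    return [round((p - lo) * 255 / (hi - lo)) for p in photo]

def computer_b(photo):
    # Computer B: the graphics specialist. Its "deblur" is a trivial
    # stand-in; it then calls on Computer C for help with contrast.
    deblurred = [min(255, p + 10) for p in photo]
    return computer_c(deblurred)

def computer_a(photo):
    # Computer A: transmits the photo to B and receives the result back.
    return computer_b(photo)

blurred_photo = [100, 102, 101, 103, 99, 104]  # toy "pixels"
print(computer_a(blurred_photo))  # [51, 153, 102, 204, 0, 255]

The point of the sketch, as of the original example, is the division of labor: A owns the problem, B owns the graphics expertise, and B may transparently enlist C, with the processed work shuttled back along the chain.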

Written by (Roughly) Daily

July 3, 2022 at 1:00 am

“If the map doesn’t agree with the ground, the map is wrong”*…

Mercator’s depiction of Rupes Nigra

Maps from hundreds of years ago can be surprisingly accurate… or they can just be really, really wrong. Weird maps from history invent lands wholesale, distort entire continents, or attempt to explain magnetism planet-wide. Sometimes the mistakes had a surprising amount of staying power, too, getting passed from map to map over the course of years while there was little chance to independently verify…

Gerardus Mercator, creator of everyone’s favorite map projection, didn’t know what the north pole looked like. No one in his time really did. But they knew that magnetic compasses always pointed north, and so a theory developed: the north pole was marked by a giant magnetic black-rock island.

He quotes a description of the pole in a letter: “In the midst of the four countries is a Whirl-pool, into which there empty these four indrawing Seas which divide the North. And the water rushes round and descends into the Earth just as if one were pouring it through a filter funnel. It is four degrees wide on every side of the Pole, that is to say eight degrees altogether. Except that right under the Pole there lies a bare Rock in the midst of the Sea. Its circumference is almost 33 French miles, and it is all of magnetic Stone (…) This is word for word everything that I copied out of this author [Jacobus Cnoyen] years ago.”

Mercator was not the first or only mapmaker to show the pole as Rupes Nigra, and the concept also tied into fiction and mythology for a while. The idea eventually died out, but people explored the Arctic in hopes of finding a passage through the pole’s seas for years before the pole was actually explored in the 1900s…

See five more confounding charts at “The Weird History of Extremely Wrong Maps.”

And for fascinating explanations of maps with intentional “mistakes,” see: “MapLab: The Legacy of Copyright Traps” and “A map is the greatest of all epic poems.”

* Gordon Livingston

###

As we find our way, we might spare a thought for Thomas Doughty; he was beheaded on this date in 1578. A nobleman, soldier, scholar, and personal secretary of Christopher Hatton, Doughty befriended explorer and state-sponsored pirate Francis Drake, then sailed with him on a 1577 voyage to raid Spanish treasure fleets– a journey that ended for Doughty in a shipboard trial for treason and witchcraft, and his execution.

Although some scholars doubt the validity of the charges of treason, and question Drake’s authority to try and execute Doughty, the incident set an important precedent: according to a history of the English Navy, To Rule the Waves: How the British Navy Shaped the Modern World by Arthur L. Herman, Doughty’s execution established the idea that a ship’s captain was its absolute ruler, regardless of the rank or social class of its passengers.

source

Written by (Roughly) Daily

July 2, 2022 at 1:00 am

“Speed and acceleration are merely the dream of making time reversible”*…

In the early 20th century, there was Futurism…

The Italian Futurists, from the first half of the twentieth century… wanted to drive modernisation in turn-of-the-century Italy at a much faster pace. They saw the potential in machines, and technology, to transform the country, to demand progress. It was not, however, merely an incrementalist approach they were after: words like annihilation, destruction and apocalypse appear in the writings of the futurists, including the author of The Futurist Manifesto, Filippo Tommaso Marinetti. ‘We want to glorify war – the only cure for the world…’ Marinetti proclaimed – this was not for the faint-hearted! That same Marinetti was the founder of the Partito Politico Futurista in 1918, which became part of Mussolini’s Fascist party in 1919. Things did not go well after that.

Beautiful Ideas Which Kill: Accelerationism, Futurism and Bewilderment

And now, in the early 21st century, there is Accelerationism…

These [politically-motivated mass] killings were often linked to the alt-right, described as an outgrowth of the movement’s rise in the Trump era. But many of these suspected killers, from Atomwaffen thugs to the New Zealand mosque shooter to the Poway synagogue attacker, are more tightly connected to a newer and more radical white supremacist ideology, one that dismisses the alt-right as cowards unwilling to take matters into their own hands.

It’s called “accelerationism,” and it rests on the idea that Western governments are irreparably corrupt. As a result, the best thing white supremacists can do is accelerate their demise by sowing chaos and creating political tension. Accelerationist ideas have been cited in mass shooters’ manifestos — explicitly, in the case of the New Zealand killer — and are frequently referenced in white supremacist web forums and chat rooms.

Accelerationists reject any effort to seize political power through the ballot box, dismissing the alt-right’s attempts to engage in mass politics as pointless. If one votes, one should vote for the most extreme candidate, left or right, to intensify points of political and social conflict within Western societies. Their preferred tactic for heightening these contradictions, however, is not voting, but violence — attacking racial minorities and Jews as a way of bringing us closer to a race war, and using firearms to spark divisive fights over gun control. The ultimate goal is to collapse the government itself; they hope for a white-dominated future after that…

“Accelerationism: the obscure idea inspiring white supremacist killers around the world” (and source of the image above)

See also: “A Year After January 6, Is Accelerationism the New Terrorist Threat?”

For a look at the “intellectual” roots of accelerationism, see “Accelerationism: how a fringe philosophy predicted the future we live in.”

For a powerful articulation of the dangers of Futurism (and even more, Accelerationism), see “The Perils of Smashing the Past.”

And for a reminder of the not-so-obvious ways that movements like these live on, see “The Intentionally Scandalous 1932 Cookbook That Stands the Test of Time,” on The Futurist Cookbook, by Futurist Manifesto author Filippo Tommaso Marinetti… which foreshadowed the “food as fuel” culinary movements that we see today.

* Jean Baudrillard

###

As we slow down, we might send an “Alles Gute zum Geburtstag” to the polymathic Gottfried Wilhelm Leibniz, the philosopher, mathematician, and political adviser, who was important both as a metaphysician and as a logician, but who is probably best remembered for his independent invention of the calculus; he was born on this date in 1646.  Leibniz discovered and developed differential and integral calculus on his own, which he published in 1684; but he became involved in a bitter priority dispute with Isaac Newton, whose ideas on the calculus were developed earlier (1665), but published later (1687).

As it happens, Leibniz was a wry and incisive political and cultural observer.  Consider, e.g…

If geometry conflicted with our passions and our present concerns as much as morality does, we would dispute it and transgress it almost as much–in spite of all Euclid’s and Archimedes’ demonstrations, which would be treated as fantasies and deemed to be full of fallacies. [Leibniz, New Essays, p. 95]


 source
