“The importance of experimental proof, on the other hand, does not mean that without new experimental data we cannot make advances”*…

Adam Becker explains why demanding that a theory is falsifiable or observable, without any subtlety, will hold science back…
The Viennese physicist Wolfgang Pauli suffered from a guilty conscience. He’d solved one of the knottiest puzzles in nuclear physics, but at a cost. ‘I have done a terrible thing,’ he admitted to a friend in the winter of 1930. ‘I have postulated a particle [the neutrino] that cannot be detected.’
Despite his pantomime of despair, Pauli’s letters reveal that he didn’t really think his new sub-atomic particle would stay unseen. He trusted that experimental equipment would eventually be up to the task of proving him right or wrong, one way or another. Still, he worried he’d strayed too close to transgression. Things that were genuinely unobservable, Pauli believed, were anathema to physics and to science as a whole.
Pauli’s views persist among many scientists today. It’s a basic principle of scientific practice that a new theory shouldn’t invoke the undetectable. Rather, a good explanation should be falsifiable – which means it ought to rely on some hypothetical data that could, in principle, prove the theory wrong. These interlocking standards of falsifiability and observability have proud pedigrees: falsifiability goes back to the mid-20th-century philosopher of science Karl Popper, and observability goes further back than that. Today they’re patrolled by self-appointed guardians, who relish dismissing some of the more fanciful notions in physics, cosmology and quantum mechanics as just so many castles in the sky. The cost of allowing such ideas into science, say the gatekeepers, would be to clear the path for all manner of manifestly unscientific nonsense.
But for a theoretical physicist, designing sky-castles is just part of the job. Spinning new ideas about how the world could be – or in some cases, how the world definitely isn’t – is central to their work. Some structures might be built up with great care over many years and end up with peculiar names such as inflationary multiverse or superstring theory. Others are fabricated and dismissed casually over the course of a single afternoon, found and lost again by a lone adventurer in the troposphere of thought.
That doesn’t mean it’s just freestyle sky-castle architecture out there at the frontier. The goal of scientific theory-building is to understand the nature of the world with increasing accuracy over time. All that creative energy has to hook back onto reality at some point. But turning ingenuity into fact is much more nuanced than simply announcing that all ideas must meet the inflexible standards of falsifiability and observability. These are not measures of the quality of a scientific theory. They might be neat guidelines or heuristics, but as is usually the case with simple answers, they’re also wrong, or at least only half-right.
Falsifiability doesn’t work as a blanket restriction in science for the simple reason that there are no genuinely falsifiable scientific theories. I can come up with a theory that makes a prediction that looks falsifiable, but when the data tell me it’s wrong, I can conjure some fresh ideas to plug the hole and save the theory.
The history of science is full of examples of this ex post facto intellectual engineering…
[Becker recounts the story of Herschel’s discovery of Uranus, the challenge it posed to Newtonian gravity, and Einstein’s ultimately saving theory; then returns to Pauli’s neutrino and Bohr’s attempts to use the same puzzle to refute the principle of conservation of energy; and finally explores the disagreement between Boltzmann, Maxwell, and Clausius (on the one hand) and Mach (on the other) over atomic theory. He then considers competing theories for similar outcomes (that’s to say, theories that are observationally identical)…]
… the choices we make between observationally identical theories have a big impact upon the practice of science. The American physicist Richard Feynman pointed out that two wildly different theories that have identical observational consequences can still give you different perspectives on problems, and lead you to different answers and different experiments to conduct in order to discover the next theory. So it’s not just the observable content of our scientific theories that matters. We use all of it, the observable and the unobservable, when we do science. Certainly, we are more wary about our belief in the existence of invisible entities, but we don’t deny that the unobservable things exist, or at least that their existence is plausible.
Some of the most interesting scientific work gets done when scientists develop bizarre theories in the face of something new or unexplained. Madcap ideas must find a way of relating to the world – but demanding falsifiability or observability, without any sort of subtlety, will hold science back. It’s impossible to develop successful new theories under such rigid restrictions. As Pauli said when he first came up with the neutrino, despite his own misgivings: ‘Only those who wager can win.’…
We need madcap ideas: “What is good science?,” from @FreelanceAstro in @aeonmag.
Apposite: Charles Sanders Peirce on “abduction”
* Carlo Rovelli, Reality Is Not What It Seems
###
As we ponder proof, we might spare a thought for Donald William Kerst; he died on this date in 1993. A physicist, he helped develop the experimental approach (and apparatus) that let Enrico Fermi confirm the existence of Pauli’s neutrino (among many other discoveries).
Kerst specialized in plasma physics, and worked on advanced particle accelerator concepts (accelerator physics). He developed the Betatron (1940), the first device to accelerate electrons (“beta particles”) to energies high enough to produce nuclear transformations in atoms. It influenced all subsequent particle accelerators.

“Ultimately, it is the desire, not the desired, that we love”*…
Or is it? The web– and the world– are awash in talk of the Mimetic Theory of Desire (or Rivalry, as its creator, René Girard, would also have it). Stanford professor (and Philosophy Talk co-host) Joshua Landy weighs in with a heavy word of caution…
Here are two readings of Shakespeare’s Hamlet. Which do you think we should be teaching in our schools and universities?
Reading 1. Hamlet is unhappy because he, like all of us, has no desires of his own, and therefore has no being, properly speaking. The best he can do is to find another person to emulate, since that’s the only way anyone ever develops the motivation to do anything. Shakespeare’s genius is to show us this life-changing truth.
Reading 2. Hamlet is unhappy because he, like all of us, is full of body thetans, harmful residue of the aliens brought to Earth by Xenu seventy-five million years ago and disintegrated using nuclear bombs inside volcanoes. Since it is still some time until the practice of auditing comes into being, Hamlet has no chance of becoming “clear”; it is no wonder that he displays such melancholy and aimlessness. Shakespeare’s genius is to show us this life-changing truth.
Whatever you make of the first, I’m rather hoping that you feel at least a bit uncomfortable with the second. If so, I have a follow-up question for you: what exactly is wrong with it? Why not rewrite the textbooks so as to make it our standard understanding of Shakespeare’s play? Surely you can’t fault the logic behind it: if humans have indeed been full of body thetans since they came into existence, and Hamlet is a representation of a human being, Hamlet must be full of body thetans. What is more, if everyone is still full of body thetans, then Shakespeare is doing his contemporaries a huge favor by telling them, and the new textbooks will be doing us a huge favor by telling the world. Your worry, presumably, is that this whole body thetan business is just not true. It’s an outlandish hypothesis, with nothing whatsoever to support it. And since, as Carl Sagan once said, “extraordinary claims require extraordinary evidence,” we would do better to leave it alone.
I think you see where I’m going with this. The fact is, of course, that the first reading is just as outlandish as the second. As I’m about to show (not that it should really need showing), human beings do have desires of their own. That doesn’t mean that all our desires are genuine; it’s always possible to be suckered into buying a new pair of boots, regardless of the fact that they are uglier and shoddier than our old ones, just because they are fashionable. What it means is that some of our desires are genuine. And having some genuine desires, and being able to act on them, is sufficient for the achievement of authenticity. For all we care, Hamlet’s inky cloak could be made by Calvin Klein, his feathered hat by Diane von Furstenberg; the point is that he also has motivations (to know things, to be autonomous, to expose guilt, to have his story told accurately) that come from within, and that those are the ones that count.
To my knowledge, no one in the academy actually reads Hamlet (or anything else) the second way. But plenty read works of literature the first way. René Girard, the founder of the approach, was rewarded for doing so with membership in the Académie française, France’s elite intellectual association. People loved his system so much that they established a Colloquium on Violence and Religion, hosted by the University of Innsbruck, complete with a journal under the ironically apt name Contagion. More recently, Peter Thiel, the co-founder of PayPal, loved it so much that he sank millions of dollars into Imitatio, an institute for the dissemination of Girardian thought. And to this day, you’ll find casual references to the idea everywhere, from people who seem to think it’s a truth, one established by René Girard. (Here’s a recent instance from the New York Times opinion pages: “as we have learned from René Girard, this is precisely how desires are born: I desire something by way of imitation, because someone else already has it.”) All of which leads to an inevitable question: what’s the difference between Girardianism and Scientology? Why has the former been more successful in the academy? Why is the madness of theory so, well, contagious?…
Are we really dependent on others for our desires? Does that mechanism inevitably lead to rivalry, scapegoating, and division? @profjoshlandy suggests not: “Deceit, Desire, and the Literature Professor: Why Girardians Exist,” in @StanfordArcade. Via @UriBram in @TheBrowser. Eminently worth reading in full.
* Friedrich Nietzsche (an inspiration to Girard)
###
As we tease apart theorizing, we might spare a thought for William Whewell; he died on this date in 1866. A scientist, Anglican priest, philosopher, theologian, and historian of science, he was Master of Trinity College, Cambridge.
At a time when specialization was increasing, Whewell was renowned for the breadth of his work: he published in the disciplines of mechanics, physics, geology, astronomy, and economics, while also finding the time to compose poetry, author a Bridgewater Treatise, translate the works of Goethe, and write sermons and theological tracts. In mathematics, Whewell introduced what is now called the Whewell equation, defining the shape of a curve without reference to an arbitrarily chosen coordinate system. He founded mathematical crystallography and developed a revision of Friedrich Mohs’s classification of minerals. And he organized thousands of volunteers internationally to study ocean tides, in what is now considered one of the first citizen science projects.
But some argue that Whewell’s greatest gift to science was his wordsmithing: He created the words scientist and physicist by analogy with the word artist; they soon replaced the older term natural philosopher. He also named linguistics, consilience, catastrophism, uniformitarianism, and astigmatism.
Other useful words were coined to help his friends: biometry for John Lubbock; Eocene, Miocene, and Pliocene for Charles Lyell; and for Michael Faraday, electrode, anode, cathode, diamagnetic, paramagnetic, and ion (whence the sundry other particle names ending in -ion).
“The nature of an innovation is that it will arise at a fringe”*…

Alternative media outlets of the Left and Right have become a crucial supplement to our knowledge of the world, providing those perspectives usually ignored by our mainstream media...
From the masthead of the aggressively inclusive site that means to make those views available, The Unz Review.
The collection is sufficiently vast that your correspondent cannot guarantee against any bias in its eclecticism (indeed, he notes that it is the work of Ron Unz). Still, it’s a remarkable aggregation of theory, opinion, and reportage, from what seems a broad array of points-of-view.
Readers are advised to steel themselves, take a deep breath… then dive in.
Pair with Wikipedia’s Fringe Theory page– and perhaps more interestingly still, their explanation in their editorial guidelines of how they identify and classify fringe theories.
* “The nature of an innovation is that it will arise at a fringe where it can afford to become prevalent enough to establish its usefulness without being overwhelmed by the inertia of the orthodox system.” — Kevin Kelly
###
As we iris out, we might recall that it was on this date in 1804 that the Corps of Discovery– better known today as the Lewis and Clark Expedition– left Camp Dubois, near Wood River, Illinois, commencing what would be a trek of over two years, on which they became the first American expedition to cross what is now the western portion of the United States.
President Thomas Jefferson had commissioned the expedition shortly after the Louisiana Purchase (in 1803) to explore and to map the newly acquired territory, to find a practical route across the western half of the continent– a Northwest Passage– and to establish an American presence in this territory before Britain and other European powers tried to claim it.

Meriwether Lewis and William Clark
“Anyone who lives within their means suffers from a lack of imagination”*…
Modern Monetary Theory’s basic principle seems blindingly obvious: Under a fiat currency system, a government can print as much money as it likes. As long as a country can mobilize the necessary real resources of labor, machinery, and raw materials, it can provide public services. Our fear of deficits, according to MMT, comes from a profound misunderstanding of the nature of money.
Every five-year-old understands money. It’s what you give the nice lady before she hands you the ice cream cone—an object with intrinsic value that can be redeemed for goods or services. Through the lens of Modern Monetary Theory, however, a dollar is nothing but a liability issued by the US government, which promises to accept it back in payment of taxes. The dollar in your pocket represents a debt owed you by the federal government. Money isn’t a lump of gold but rather an IOU.
This mildly metaphysical distinction ends up having huge practical consequences. It means the federal government, unlike you and me, can’t run out of cash. It can run out of things money can buy—which will drive up their price and be manifest in inflation—but it can’t run out of money. As Sam Levey, a graduate student in economics who tweets under the name Deficit Owls, told me, “Macy’s can’t run out of Macy’s gift certificates.”
Especially for those who want the government to provide more services to citizens, this is a convincing argument, and one that can be understood by non-economists…
Everyone knows governments need to tax before they can spend. What Modern Monetary Theory presupposes is, maybe they don’t. Offered for interest (and with no endorsement): “The Radical Theory That the Government Has Unlimited Money.”
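The issuer-versus-user distinction at the heart of the argument can be sketched in a few lines of Python (a toy model of our own devising, not from the article; the class and attribute names are illustrative assumptions):

```python
class Household:
    """A currency *user*: spending is limited by the dollars it holds."""
    def __init__(self, dollars):
        self.dollars = dollars

    def spend(self, amount):
        if amount > self.dollars:
            raise ValueError("a currency user can run out of money")
        self.dollars -= amount
        return amount


class CurrencyIssuer:
    """A currency *issuer*: its spending creates its own IOUs, so it is
    never constrained by a prior balance -- only (outside this toy model)
    by the real resources available for its money to buy."""
    def __init__(self):
        # Dollars in circulation = the public's outstanding claim on the issuer.
        self.iou_outstanding = 0

    def spend(self, amount):
        self.iou_outstanding += amount   # spending issues new IOUs
        return amount

    def tax(self, amount):
        self.iou_outstanding -= amount   # taxation redeems (destroys) IOUs


gov = CurrencyIssuer()
gov.spend(100)   # no prior "balance" is needed
gov.tax(40)
print(gov.iou_outstanding)  # 60: net IOUs left in the public's hands
```

On this picture the deficit is simply the issuer’s outstanding IOUs– Macy’s can’t run out of Macy’s gift certificates, though it can certainly issue more of them than its shelves can honor.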
* Oscar Wilde
###
As we crank up the printing press, we might recall that it was on this date in 2009, several months into the Great Recession, that President Barack Obama met with the CEOs of America’s 13 largest financial institutions to discuss a path out of the economic trough into which the U.S. had descended. Finding them suspicious of his new (Democratic) administration and worried that he would be less generous to their companies than President Bush and his administration had been, Obama opened by suggesting…
My administration is the only thing between you and the pitchforks… But you need to show that you get that this is a crisis and that everyone has to make some sacrifices… I’m not out there to go after you. I’m protecting you. But if I’m going to shield you from public and congressional anger, you have to give me something to work with on these issues of compensation. Help me help you. Everybody has to pitch in. We’re all in this together.
The result was a series of compromises that survived the Obama Administration, but that are now being systematically undone under the Trump Administration.
See also “13 Bankers.”

Kenneth D. Lewis, the chief executive of Bank of America, with other bank executives outside the White House after the meeting with President Obama
“A few scribbles on a blackboard… can change the course of human affairs”*…

What’s the most transformative piece of technology in U.S. classrooms? Smart boards? Laptops? In a 2000 paper on computers in education, Steven D. Krause argues that it’s one that’s been around for nearly two centuries: the blackboard. And he suggests that if we want to understand how teachers adopt technology, we might want to study its history.
To understand the impact of blackboards, Krause writes, we need to consider what schools were like before them. Around 1800, most U.S. schools were one-room log buildings with a fireplace at one end and a single window at the other. “Writing lessons” generally meant students working on their own, whittling goose-quill pens and copying out texts.
When the idea of chalkboards first arrived in the early nineteenth century, they came as a revelation to teachers and education experts. In 1841, one educator declared that the blackboard’s unknown inventor “deserves to be ranked among the best contributors to learning and science, if not among the greatest benefactors of mankind.” Around the same time, another writer praised blackboards for “reflecting the workings, character and quality of the individual mind.”
It’s important to remember that school budgets and student-teacher ratios in the early nineteenth century would seem ludicrous to a modern school district. One teacher might be responsible for hundreds of students, with very little funding for supplies.
Krause writes that one prominent way of using the blackboard to improve education under these circumstances was known as the Lancasterian method, after British educator Joseph Lancaster. Lancaster prescribed particular ways of physically arranging the classroom so that a teacher could work with a large group all at once…
The whole dusty story at “How blackboards transformed American education.” Read Krause’s paper, “‘Among the Greatest Benefactors of Mankind’: What the Success of Chalkboards Tells Us about the Future of Computers in the Classroom,” here.
* Stanislaw Ulam
###
As we clean the erasers, we might send repetitive-but-instructive birthday greetings to Edwin Ray Guthrie; he was born on this date in 1886. A philosopher and mathematician by training, he became a leading behavioral psychologist, specializing in the psychology of learning and more specifically, in the role association plays in acquiring skills. He’s probably best remembered for his belief that all learning is based on a stimulus-response association, instantiated in his Law of Contiguity, which held that “a combination of stimuli which has accompanied a movement, will on its recurrence tend to be followed by that movement.” Movements are, he argued, small stimulus-response combinations; these movements make up an act. Thus, a learned behavior– an act that is consolidated by the learner so that it can be repeated– is, at its root, a series of movements. Guthrie believed that what is learned are the movements (of which the behaviors are simply a result).


