(Roughly) Daily


“It can be argued that in trying to see behind the formal predictions of quantum theory we are just making trouble for ourselves”*…

Context, it seems, is everything…

… What is reality? Nope. There’s no way we are going through that philosophical minefield. Let’s focus instead on scientific realism: the idea that a world of things exists independent of the minds that might perceive it, and that this is the world slowly revealed by progress in science. Scientific realism is the belief that the true nature of reality is the subject of scientific investigation, and that while we may not completely understand it at any given moment, each experiment gets us a little bit closer. This is a popular philosophical position among scientists and science enthusiasts.

A typical scientific realist might believe, for example, that fundamental particles exist even though we cannot perceive them directly with our senses. Particles are real and their properties — whatever they may be — form part of the state of the world. A slightly more extreme view is that this state of the world can be specified with mathematical quantities and these, in turn, obey equations we call physical laws. In this view, the ultimate goal of science is to discover these laws. So what are the consequences of quantum physics on these views?

As I mentioned above, quantum physics is not a realistic model of the world — that is, it does not specify quantities for states of the world. An obvious question is then: can we supplement or otherwise replace quantum physics with a deeper set of laws about real states of the world? This is the question Einstein first asked with colleagues Podolsky and Rosen, making headlines in 1935. The hypothetical real states of the world came to be called hidden variables, since an experiment does not reveal them — at least not yet.

In the decades that followed, quantum physics rapidly turned into applied science, and the textbooks that became canon demonstrated only how to use the recipes of quantum physics. In textbooks that are still used today, no mention is made of the progress in the foundational aspects of quantum physics since the mathematics was cemented almost one hundred years ago. But, in the 1960s, the most important and fundamental aspect of quantum physics was discovered, and it put serious restrictions on scientific realism. Some go as far as to say the entire nature of independent reality is questionable because of it. What was discovered is now called contextuality, and its inevitability is referred to as the Bell–Kochen–Specker theorem.

John Bell is the most famous of the trio Bell, Kochen, and Specker, and is credited with proving that quantum physics contained so-called nonlocal correlations, a consequence of quantum entanglement. Feel free to read about those over here.

It was Bell’s ideas and notions that stuck and eventually led to popular quantum phenomena such as teleportation. Nonlocality itself is wildly popular these days in science magazines with reported testing of the concept in delicately engineered experiments that span continents and sometimes involve research satellites. But nonlocality is just one type of contextuality, which is the real game in town.

In the most succinct sentence possible, contextuality is the name for the fact that any real states of the world giving rise to the rules of quantum physics must depend on contexts that no experiment can distinguish. That’s a lot to unpack. Remember that there are lots of ways to prepare the same experiment — and by the same experiment, I mean many different experiments with completely indistinguishable results. Doing the exact same thing as yesterday in the lab, but having had a different breakfast, will give the same experimental results. But there are things in the lab and very close to the system under investigation that don’t seem to affect the results either. An example might be mixing laser light in two different ways.

There are different types of laser light that, once mixed together, are completely indistinguishable from one another no matter what experiments are performed on the mixtures. You could spend a trillion dollars on scientific equipment and never be able to tell the two mixtures apart. Moreover, knowing only the resultant mixture — and not the way it was mixed — is sufficient to accurately predict the outcomes of any experiment performed with the light. So, in quantum physics, the mathematical theory has a variable that refers to the mixture and not the way the mixture was made — it’s Occam’s razor in practice.
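The indistinguishability of mixtures can be made concrete with a little linear algebra. In quantum theory each mixture is summarized by a density matrix, and two differently prepared mixtures with the same density matrix predict identical outcomes for every possible experiment. A minimal sketch (using generic qubit states as a stand-in, not any particular laser setup):

```python
import numpy as np

# Two different ways of "mixing" polarized light (modeled as qubit states).
# Mixture A: equal parts horizontal |0> and vertical |1>.
# Mixture B: equal parts diagonal |+> and anti-diagonal |->.
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def density(states, probs):
    """Density matrix of a statistical mixture of pure states."""
    return sum(p * (s @ s.conj().T) for s, p in zip(states, probs))

rho_A = density([ket0, ket1], [0.5, 0.5])
rho_B = density([plus, minus], [0.5, 0.5])

# Quantum theory assigns both mixtures the same density matrix,
# so no experiment can tell them apart.
print(np.allclose(rho_A, rho_B))  # True
```

This is why, as the excerpt says, the theory carries a variable for the mixture but not for the way the mixture was made.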

Now let’s try to invent a deeper theory of reality underpinning quantum physics. Surely, if we are going to respect Occam’s razor, the states in our model should only depend on contexts with observable consequences, right? If there is no possible experiment that can distinguish how the laser light is mixed, then the underlying state of reality should only depend on the mixture and not the context in which it was made, which, remember, might include my breakfast choices. Alas, this is just not possible in quantum physics — it’s a mathematical impossibility in the theory and has been confirmed by many experiments.
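The mathematical impossibility alluded to here can be seen in miniature in the Mermin–Peres “magic square” argument, a standard textbook illustration of the Bell–Kochen–Specker theorem (it is not from the excerpted article). Quantum observables can satisfy a set of row and column constraints that no fixed, context-independent assignment of values can, and an exhaustive search confirms it:

```python
from itertools import product

# Mermin-Peres "magic square" constraints: nine values of +1 or -1
# arranged in a 3x3 grid such that every row multiplies to +1 and
# the columns multiply to +1, +1, -1. Quantum observables meet these
# constraints, but no fixed (noncontextual) value assignment can:
# the row constraints force the product of all nine values to be +1,
# while the column constraints force it to be -1.
solutions = []
for grid in product([+1, -1], repeat=9):
    rows_ok = all(grid[3*r] * grid[3*r + 1] * grid[3*r + 2] == +1
                  for r in range(3))
    cols_ok = (grid[0] * grid[3] * grid[6] == +1 and
               grid[1] * grid[4] * grid[7] == +1 and
               grid[2] * grid[5] * grid[8] == -1)
    if rows_ok and cols_ok:
        solutions.append(grid)

print(len(solutions))  # 0 -- no noncontextual assignment exists
```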

So, does this mean the universe cares about what I have for breakfast? Not necessarily. But, to believe the universe doesn’t care what I had for breakfast means you must also give up reality. You may be inclined to believe that when you observe something in the world, you are passively looking at it just the way it would have been had you not been there. But quantum contextuality rules this out. There is no way to define a reality that is independent of the way we choose to look at it…

“Why is no one taught the one concept in quantum physics which denies reality?” It’s called contextuality and it is the essence of quantum physics. From Chris Ferrie (@csferrie).

* “It can be argued that in trying to see behind the formal predictions of quantum theory we are just making trouble for ourselves. Was not precisely this the lesson that had to be learned before quantum mechanics could be constructed, that it is futile to try to see behind the observed phenomena?” – John Stewart Bell


As still we try, we might send relatively hearty birthday greetings to Sir Marcus Laurence Elwin “Mark” Oliphant; he was born on this date in 1901. An Australian physicist who trained and did much of his work in England (where he studied under Sir Ernest Rutherford at the University of Cambridge’s Cavendish Laboratory), Oliphant was deeply involved in the Allied war effort during World War II. He helped develop microwave radar, and– by helping to start the Manhattan Project and then working with his friend Ernest Lawrence at the Radiation Laboratory in Berkeley, California– helped develop the atomic bomb.

After the war, Oliphant returned to Australia as the first director of the Research School of Physical Sciences and Engineering at the new Australian National University (ANU); on his retirement, he became Governor of South Australia and helped found the Australian Democrats political party.


“I’m a little tea pot / Short and stout”*…

The original Utah teapot, currently on display at the Computer History Museum in Mountain View, California.

The fascinating story of the “Utah teapot,” the ur-object in the development of computer graphics…

This unassuming object—the “Utah teapot,” as it’s affectionately known—has had an enormous influence on the history of computing, dating back to 1974, when computer scientist Martin Newell was a Ph.D. student at the University of Utah.

The U of U was a powerhouse of computer graphics research then, and Newell had some novel ideas for algorithms that could realistically display 3D shapes—rendering complex effects like shadows, reflective textures, or rotations that reveal obscured surfaces. But, to his chagrin, he struggled to find a digitized object worthy of his methods. Objects that were typically used for simulating reflections, like a chess pawn, a donut, and an urn, were too simple.

One day over tea, Newell told his wife Sandra that he needed more interesting models. Sandra suggested that he digitize the shapes of the tea service they were using, a simple Melitta set from a local department store. It was an auspicious choice: The curves, handle, lid, and spout of the teapot all conspired to make it an ideal object for graphical experiment. Unlike other objects, the teapot could, for instance, cast a shadow on itself in several places. Newell grabbed some graph paper and a pencil, and sketched it.

Back in his lab, he entered the sketched coordinates—called Bézier control points, first used in the design of automobile bodies—on a Tektronix storage tube, an early text and graphics computer terminal. The result was a lovely virtual teapot, more versatile (and probably cuter) than any 3D model to date.
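For the curious, a bicubic Bézier patch of the kind Newell used maps a 4×4 grid of control points to a smooth surface via Bernstein polynomials. A minimal sketch of the evaluation (the control points below are a made-up toy patch, not Newell’s actual teapot data):

```python
import numpy as np
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch(control, u, v):
    """Evaluate a bicubic Bezier patch at (u, v) in [0, 1]^2.

    control: 4x4 grid of 3D control points (shape (4, 4, 3)),
    like the 16-point patches that make up the Utah teapot.
    """
    point = np.zeros(3)
    for i in range(4):
        for j in range(4):
            point += bernstein(i, 3, u) * bernstein(j, 3, v) * control[i, j]
    return point

# A toy patch: a flat square whose interior control points bulge upward.
ctrl = np.array([[[i, j, 1.0 if 0 < i < 3 and 0 < j < 3 else 0.0]
                  for j in range(4)] for i in range(4)], dtype=float)
```

A handy property of Bézier patches is that the surface passes exactly through the four corner control points, which makes patches easy to stitch together into larger models like the teapot.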

The new model was particularly appealing to Newell’s colleague, Jim Blinn [of whom Ivan Sutherland, the head of the program at Utah and a computer graphics pioneer said, “There are about a dozen great computer graphics people and Jim Blinn is six of them”]. One day, demonstrating how his software could adjust an object’s height, Blinn flattened the teapot a bit, and decided he liked the look of that version better. The distinctive Utah teapot was born.

The computer model proved useful for Newell’s own research, featuring prominently in his next few publications. But he and Blinn also took the important step of sharing their model publicly. As it turned out, other researchers were also starved for interesting 3D models, and the digital teapot was exactly the experimental test bed they needed. At the same time, the shape was simple enough for Newell to input and for computers to process. (Rumor has it some researchers even had the data points memorized!) And unlike many household items, like furniture or fruit-in-a-bowl, the teapot’s simulated surface looked realistic without superimposing an artificial, textured pattern.

The teapot quickly became a beloved staple of the graphics community. Teapot after teapot graced the pages and covers of computer graphics journals. “Anyone with a new idea about rendering and lighting would announce it by first trying it out on a teapot,” writes animator Tom Sito in Moving Innovation...

These days, the Utah teapot has achieved legendary status. It’s a built-in shape in many 3D graphics software packages used for testing, benchmarking, and demonstration. Graphics geeks like to sneak it into scenes and games as an in-joke, an homage to their countless hours of rendering teapots; hence its appearances in Windows, Toy Story, and The Simpsons.

Over the past few years, the teapot has been 3D printed back into the physical world, both as a trinket and as actual china. Pixar even made its own music video in honor of the teapot, titled “This Teapot’s Made for Walking,” and a teapot wind-up toy as a promotion for its RenderMan software.

Newell has jokingly lamented that, despite all his algorithmic innovations, he’ll be remembered primarily for “that damned teapot.” But as much as computer scientists try to prove their chops by inventing clever algorithms, test beds for experimentation often leave a bigger mark. Newell essentially designed the model organism of computer graphics: the teapot is to graphics researchers what lab mice are to biologists.

For the rest of us the humble teapot serves as a reminder that, in the right hands, something simple can become an icon of creativity and hidden potential…

How a humble serving piece shaped a technological domain: “The Most Important Object In Computer Graphics History Is This Teapot,” from Jesse Dunietz (@jdunietz)

* from “I’m a Little Tea Pot,” a 1939 novelty song by George Harold Sanders and Clarence Z. Kelley


As we muse on models, we might send foundational birthday greetings to Michael Faraday; he was born on this date in 1791. One of the great experimental scientists of all time, Faraday made huge contributions to the study of electromagnetism and electrochemistry.

Although Faraday received little formal education, he was one of the most influential scientists in history. It was by his research on the magnetic field around a conductor carrying a direct current that Faraday established the basis for the concept of the electromagnetic field in physics. Faraday also established that magnetism could affect rays of light and that there was an underlying relationship between the two phenomena. He similarly discovered the principles of electromagnetic induction and diamagnetism, and the laws of electrolysis. His inventions of electromagnetic rotary devices formed the foundation of electric motor technology, and it was largely due to his efforts that electricity became practical for use in technology [including, of course, computing and computer graphics].

As a chemist, Faraday discovered benzene, investigated the clathrate hydrate of chlorine, invented an early form of the Bunsen burner and the system of oxidation numbers, and popularised terminology such as “anode”, “cathode”, “electrode”, and “ion”. Faraday ultimately became the first and foremost Fullerian Professor of Chemistry at the Royal Institution, a lifetime position.

Faraday was an excellent experimentalist who conveyed his ideas in clear and simple language; his mathematical abilities, however, did not extend as far as trigonometry and were limited to the simplest algebra. James Clerk Maxwell took the work of Faraday and others and summarized it in a set of equations which is accepted as the basis of all modern theories of electromagnetic phenomena. On Faraday’s uses of lines of force, Maxwell wrote that they show Faraday “to have been in reality a mathematician of a very high order – one from whom the mathematicians of the future may derive valuable and fertile methods.”…

Albert Einstein kept a picture of Faraday on his study wall, alongside pictures of Arthur Schopenhauer and James Clerk Maxwell. Physicist Ernest Rutherford stated, “When we consider the magnitude and extent of his discoveries and their influence on the progress of science and of industry, there is no honour too great to pay to the memory of Faraday, one of the greatest scientific discoverers of all time.”



“This potential possibility need only play a role as a counterfactual, according to quantum theory, for it to have an actual effect!”*…

Contemplate counterfactuals: things that have not happened — but could happen — a neglected area of scientific theory…

If you could soar high in the sky, as red kites often do in search of prey, and look down at the domain of all things known and yet to be known, you would see something very curious: a vast class of things that science has so far almost entirely neglected. These things are central to our understanding of physical reality, both at the everyday level and at the level of the most fundamental phenomena in physics — yet they have traditionally been regarded as impossible to incorporate into fundamental scientific explanations. They are facts not about what is — the ‘actual’ — but about what could or could not be. In order to distinguish them from the actual, they are called counterfactuals.

Suppose that some future space mission visited a remote planet in another solar system, and that they left a stainless-steel box there, containing among other things the critical edition of, say, William Blake’s poems. That the poetry book is subsequently sitting somewhere on that planet is a factual property of it. That the words in it could be read is a counterfactual property, which is true regardless of whether those words will ever be read by anyone. The box may never be found; and yet that those words could be read would still be true — and laden with significance. It would signify, for instance, that a civilization visited the planet, and much about its degree of sophistication.

To further grasp the importance of counterfactual properties, and their difference from actual properties, imagine a computer programmed to produce on its display a string of zeroes. That is a factual property of the computer, to do with its actual state — with what is. The fact that it could be reprogrammed to output other strings is a counterfactual property of the computer. The computer may never be so programmed; but the fact that it could is an essential fact about it, without which it would not qualify as a computer.

The counterfactuals that matter to science and physics, and that have so far been neglected, are facts about what could or could not be made to happen to physical systems; about what is possible or impossible. They are fundamental because they express essential features of the laws of physics — the rules that govern every system in the universe. For instance, a counterfactual property imposed by the laws of physics is that it is impossible to build a perpetual motion machine. A perpetual motion machine is not simply an object that moves forever once set into motion: it must also generate some useful sort of motion. If this device could exist, it would produce energy out of no energy. It could be harnessed to make your car run forever without using fuel of any sort. Any sequence of transformations turning something without energy into something with energy, without depleting any energy supply, is impossible in our universe: it could not be made to happen, because of a fundamental law that physicists call the principle of conservation of energy.

Another significant counterfactual property of physical systems, central to thermodynamics, is that a steam engine is possible. A steam engine is a device that transforms energy of one sort into energy of a different sort, and it can perform useful tasks, such as moving a piston, without ever violating that principle of conservation of energy. Actual steam engines (those that have been built so far) are factual properties of our universe. The possibility of building a steam engine, which existed long before the first one was actually built, is a counterfactual.

So the fundamental types of counterfactuals that occur in physics are of two kinds: one is the impossibility of performing a transformation (e.g., building a perpetual motion machine); the other is the possibility of performing a transformation (e.g., building a steam engine). Both are cardinal properties of the laws of physics; and, among other things, they have crucial implications for our endeavours: no matter how hard we try, or how ingeniously we think, we cannot bring about transformations that the laws of physics declare to be impossible — for example, creating a perpetual motion machine. However, by thinking hard enough, we can come up with more and better ways of performing a possible transformation — for instance, that of constructing a steam engine — which can then improve over time.

In the prevailing scientific worldview, counterfactual properties of physical systems are unfairly regarded as second-class citizens, or even excluded altogether. Why? It is because of a deep misconception, which, paradoxically, originated within my own field, theoretical physics. The misconception is that once you have specified everything that exists in the physical world and what happens to it — all the actual stuff — then you have explained everything that can be explained. Does that sound indisputable? It may well. For it is easy to get drawn into this way of thinking without ever realising that one has swallowed a number of substantive assumptions that are unwarranted. For you can’t explain what a computer is solely by specifying the computation it is actually performing at a given time; you need to explain what the possible computations it could perform are, if it were programmed in possible ways. More generally, you can’t explain the presence of a lifeboat aboard a pirate ship only in terms of an actual shipwreck. Everyone knows that the lifeboat is there because of a shipwreck that could happen (a counterfactual explanation). And that would still be the reason even if the ship never did sink!

Despite regarding counterfactuals as not fundamental, science has been making rapid, relentless progress, for example, by developing new powerful theories of fundamental physics, such as quantum theory and Einstein’s general relativity; and novel explanations in biology — with genetics and molecular biology — and in neuroscience. But in certain areas, this is no longer the case. The assumption that all fundamental explanations in science must be expressed only in terms of what happens, with little or no reference to counterfactuals, is now getting in the way of progress. For counterfactuals are essential to a number of things that are currently explained only vaguely in science, or not explained at all. Counterfactuals are central to an exact, unified theory of heat, work, and information (both classical and quantum); to explain matters such as the appearance of design in living things; and to a scientific explanation of knowledge…

An excerpt from Chiara Marletto’s The Science of Can and Can’t: A Physicist’s Journey Through the Land of Counterfactuals, via the invaluable @delanceyplace.


* Roger Penrose, Shadows of the Mind: A Search for the Missing Science of Consciousness


As we ponder the plausible, we might send superlatively speculative birthday greetings to an accomplished counterfactualist, H.G. Wells; he was born on this date in 1866. A prolific writer of novels, history, political and social commentary, textbooks, and rules for war games, Wells is best remembered (with Jules Verne and Hugo Gernsback) as “the father of science fiction” for his “scientific romances”– The War of the Worlds, The Time Machine, The Invisible Man, The Island of Doctor Moreau, et al.



The wharves of Manhattan, 1851: “There now is your insular city of the Manhattoes, belted round by wharves as Indian isles by coral reefs.”

I first encountered the work of Peter Gorman via his glorious book Barely Maps (a gift from friend MK). Early in the pandemic, Peter picked up Moby-Dick…

I read Moby-Dick in April 2020. For weeks afterward, I couldn’t stop thinking about it. I started making maps and diagrams as a way to figure it out.

Moby-Dick is infamous for its digressions. Throughout the book, the narrator disrupts the plot with contemplations, calculations, and categorizations. He ruminates on the White Whale, and the ocean, and human psychology, and the night sky, and how it all relates back to the mystery of the unknown. His narration feels like a twisting-turning struggle to explain everything.

Reading Moby-Dick actually made me feel like that—like I’d mentally absorbed its spin-cycle style. I developed a case of “Kaleidoscope Brain.” The maps I was making were obsessive and encyclopedic. They were newer and weirder and they digressed beyond straightforward geography…

Ocean currents, February – U.K. Admiralty Navigation Manual, Volume 1: “There is, one knows not what sweet mystery about this sea, whose gently awful stirrings seem to speak of some hidden soul beneath.”

Moby Dick, mapped and charted: Kaleidoscope Brain, from @barelymaps. It’s a free pdf download, though one has the opportunity– well-taken– to become a Patreon sponsor.

* Headline in New York Day Book, September 8, 1852


As we wonder about white whales, we might recall that it was on this date in 2008 that the Large Hadron Collider at CERN was first powered up. The world’s largest and highest-energy particle collider, it is devoted to searching for the new particles predicted by supersymmetry theories, and to exploring other unresolved questions in particle physics (e.g. the Higgs boson)… that’s to say, to mapping and charting existence.

A section of the LHC


A “map” of a proton-proton collision inside the Large Hadron Collider that has characteristics of a Higgs decaying into two bottom quarks.


“Supersymmetry was (and is) a beautiful mathematical idea. The problem with applying supersymmetry is that it is too good for this world.”*…

Physicists reconsider their options…

A wise proverb suggests not putting all your eggs in one basket. Over recent decades, however, physicists have failed to follow that wisdom. The 20th century—and, indeed, the 19th before it—were periods of triumph for them. They transformed understanding of the material universe and thus people’s ability to manipulate the world around them. Modernity could not exist without the knowledge won by physicists over those two centuries.

In exchange, the world has given them expensive toys to play with. The most recent of these, the Large Hadron Collider (LHC), which occupies a 27km-circumference tunnel near Geneva and cost $6bn, opened for business in 2008. It quickly found a long-predicted elementary particle, the Higgs boson, that was a hangover from calculations done in the 1960s. It then embarked on its real purpose, to search for a phenomenon called Supersymmetry.

This theory, devised in the 1970s and known as Susy for short, is the all-containing basket into which particle physics’s eggs have until recently been placed. Of itself, it would eliminate many arbitrary mathematical assumptions needed for the proper working of what is known as the Standard Model of particle physics. But it is also the vanguard of a deeper hypothesis, string theory, which is intended to synthesise the Standard Model with Einstein’s general theory of relativity. Einstein’s theory explains gravity. The Standard Model explains the other three fundamental forces—electromagnetism and the weak and strong nuclear forces—and their associated particles. Both describe their particular provinces of reality well. But they do not connect together. String theory would connect them, and thus provide a so-called “theory of everything”.

String theory proposes that the universe is composed of minuscule objects which vibrate in the manner of the strings of a musical instrument. Like such strings, they have resonant frequencies and harmonics. These various vibrational modes, string theorists contend, correspond to various fundamental particles. Such particles include all of those already observed as part of the Standard Model, the further particles predicted by Susy, which posits that the Standard Model’s mathematical fragility will go away if each of that model’s particles has a heavier “supersymmetric” partner particle, or “sparticle”, and also particles called gravitons, which are needed to tie the force of gravity into any unified theory, but are not predicted by relativity.

But, no Susy, no string theory. And, 13 years after the LHC opened, no sparticles have shown up. Even two as-yet-unexplained results announced earlier this year (one from the LHC and one from a smaller machine) offer no evidence directly supporting Susy. Many physicists thus worry they have been on a wild-goose chase…

Bye, bye little Susy? Supersymmetry isn’t (so far, anyway) proving out; and prospects look dim. But a similar fallow period in physics led to quantum theory and relativity: “Physics seeks the future.”

* Frank Wilczek


As we ponder paradigms, we might send insightful birthday greetings to Friedrich Wilhelm Ostwald; he was born on this date in 1853. A chemist and philosopher, he made many specific contributions to his field (including advances on atomic theory), and was one of the founders of the field of physical chemistry. He won the Nobel Prize in Chemistry in 1909.

Following his retirement in 1906 from academic life, Ostwald became involved in philosophy, art, and politics– to each of which he made significant contributions.

