“Reality favors symmetry”*…
Emmy Noether showed that fundamental physical laws are themselves a consequence of simple symmetries. As Shalma Wegsman explains, a century later, her insights continue to shape physics…
In the fall of 1915, the foundations of physics began to crack. Einstein’s new theory of gravity seemed to imply that it should be possible to create and destroy energy, a result that threatened to upend two centuries of thinking in physics.
Einstein’s theory, called general relativity, radically transformed the meaning of space and time. Rather than being fixed backdrops to the events of the universe, space and time were now characters in their own right, able to curve, expand and contract in the presence of matter and energy.
One problem with this shifting space-time is that as it stretches and shrinks, the density of the energy inside it changes. As a consequence, the classical energy conservation law that previously described all of physics didn’t fit this framework. David Hilbert, one of the most prominent mathematicians at the time, quickly identified this issue and set out with his colleague Felix Klein to try to resolve this apparent failure of relativity. After they were stumped, Hilbert passed the problem on to his assistant, the 33-year-old Emmy Noether.
Noether was an assistant in name only. She was already a formidable mathematician when, in early 1915, Hilbert and Klein invited her to join them at the University of Göttingen. But other faculty members objected to hiring a woman, and Noether was blocked from joining the faculty. Regardless, she would spend the next three years prodding the fault line separating physics and mathematics, eventually setting off an earthquake that would shake the foundations of fundamental physics.
In 1918, Noether published the results of her investigations in two landmark theorems. One made sense of conservation laws in small regions of space, a mathematical feat that would later prove important for understanding the symmetries of quantum field theory. The other, now just known as Noether’s theorem, says that behind every conservation law lies a deeper symmetry.
In mathematical terms, a symmetry is something you can do to a system that leaves it unchanged. Consider the act of rotation. If you start with an equilateral triangle, you’ll find that you can rotate it by multiples of 120 degrees without changing how it looks. If you start with a circle, you can rotate it by any angle. These actions without consequences reveal the underlying symmetries of these shapes.
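The triangle's symmetry is easy to verify numerically. A small illustrative sketch (mine, not from the article): represent the vertices as complex numbers and check whether a rotation maps the vertex set onto itself.

```python
import cmath
import math

# Vertices of an equilateral triangle as points in the complex plane:
# the three cube roots of unity.
vertices = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]

def rotate(points, angle):
    """Rotate each point about the origin by `angle` radians."""
    w = cmath.exp(1j * angle)
    return [w * p for p in points]

def same_shape(a, b, tol=1e-9):
    """True if the two point sets coincide (order ignored, up to rounding)."""
    return all(min(abs(p - q) for q in b) < tol for p in a)

# Rotation by 120 degrees is a symmetry: the triangle maps onto itself.
print(same_shape(rotate(vertices, 2 * math.pi / 3), vertices))  # True
# Rotation by 90 degrees is not.
print(same_shape(rotate(vertices, math.pi / 2), vertices))  # False
```

A circle, by contrast, would pass this test for every angle — its symmetry group is continuous, which is exactly the kind of symmetry Noether's theorem trades on.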
But symmetries go beyond shape. Imagine you do an experiment, then you move 10 meters to the left and do it again. The results of the experiment don’t change, because the laws of physics don’t change from place to place. This is called translation symmetry.
Now wait a few days and repeat your experiment again. The results don’t change, because the laws of physics don’t change as time passes. This is called time-translation symmetry.
Noether started with symmetries like these and explored their mathematical consequences. She worked with established physics using a common mathematical description of a physical system, called a Lagrangian.
This is where Noether’s insight went beyond the symbols on the page. On paper, symmetries seem to have no impact on the physics of the system, since symmetries don’t affect the Lagrangian. But Noether realized that symmetries must be mathematically important, since they constrain how a system can behave. She worked through what this constraint should be, and out of the mathematics of the Lagrangian popped a quantity that can’t change. That quantity corresponds to the physical property that’s conserved. The impact of symmetry had been hiding beneath the equations all along, just out of view.
In the case of translation symmetry, the system’s total momentum should never change. For time-translation symmetry, a system’s total energy is conserved. Noether discovered that conservation laws aren’t fundamental axioms of the universe. Instead, they emerge from deeper symmetries.
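In the simplest textbook form (a standard one-dimensional sketch, which the article itself leaves verbal): if a Lagrangian $L(q, \dot q)$ does not change under the shift $q \to q + \epsilon$, then $\partial L / \partial q = 0$, and the Euler–Lagrange equation

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right) = \frac{\partial L}{\partial q} = 0
```

says that the momentum $p = \partial L / \partial \dot q$ is constant in time — translation symmetry delivers momentum conservation, just as Noether's theorem promises.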
The conceptual consequences are hard to overstate. Physicists of the early 20th century were shocked to realize that a system that breaks time-translation symmetry can break energy conservation along with it. We now know that our own universe does this. The cosmos is expanding at an accelerating rate, stretching out the leftover light from the early universe. The process reduces the light’s energy as time passes…
… Noether’s theorem has shaped the quantum world too. In the 1970s, it played a big role in the construction of the Standard Model of particle physics. The symmetries of quantum fields dictate laws that restrict how fundamental particles behave. For instance, a symmetry in the electromagnetic field forces particles to conserve their charge.
The power of Noether’s theorem has inspired physicists to look toward symmetry to discover new physics. Over a century later, Noether’s insights continue to influence the way physicists think…
“How Noether’s Theorem Revolutionized Physics,” from @shalmawegs in @QuantaMagazine.
* Jorge Luis Borges
###
As we contemplate cosmology, we might send insightful birthday greetings to the man who “wrote the book” on perspective, Leon Battista Alberti; he was born on this date in 1404. The archetypical Renaissance humanist polymath, Alberti was an author, artist, architect, poet, priest, linguist, philosopher, cartographer, and cryptographer. He collaborated with Toscanelli on the maps used by Columbus on his first voyage, and he published the first book on cryptography that contained a frequency table.
But he is surely best remembered as the author of the first general treatise– Della Pittura (1434)– on the laws of perspective, which built on and extended Brunelleschi’s work to describe the approach and technique that established the science of projective geometry… and fueled the progress of painting, sculpture, and architecture from the Greek- and Arabic-influenced formalism of the High Middle Ages to the more naturalistic (and Latinate) styles of the Renaissance.


“Listening to both sides of a story will convince you that there is more to a story than both sides”*…
Regular readers will have deduced that I am something of a techno-optimist. While I worry that human misapplication (exploitation) of new technologies could create new dangers and/or further concentrate wealth and power in too few hands, I believe that emerging tech could– should– help humanity deal with many of its gravest challenges, certainly including climate change. At the same time, I am disposed to thinking about large issues/problems systemically.
Rianne Riemens shares neither of my enthusiasms; she sounds a critical note on techno-optimism, systems thinking– and more specifically, on the application of the latter to the former…
Today, American tech actors express optimistic ideas about how to fix the Earth and halt climate change. Such “green” initiatives have in common that they capture the world in systems and propose large systemic, and mostly technological, solutions. Because of their reliance on techno-fixes, representatives of Silicon Valley express an ideology of ecomodernism, which believes that human progress can be “decoupled” from environmental decline. In this article, I show how “whole-systems thinking” has become a key discursive element in today’s ecomodernist discourses. This discourse has developed from the 1960s onwards – inspired by cybernetic, ecological and computational theories – within the tech culture of California. This paper discusses three key periods in this development, highlighting key publications: the Whole Earth Catalog of the 1960s, the Limits to Growth report in 1972 and the cyberspace manifestoes of the mid 1990s. These periods are key to understand how techno-fixes became a popular answer to the climate crisis, eventually leading to a vision of the world as an ecosystem that can be easily controlled and manipulated, and of technological innovation as harmless and beneficial. I argue that “whole-systems” thinking offers a naive and misleading narrative about the development of the climate crisis, that offers a hopeful yet unrealistic perspective for a future threatened by climate change, built on a misconception of Earth as a datafied planet.
In “The Techno-Optimist Manifesto” (2023) venture capitalist Marc Andreessen argues why we should all be techno-optimists, especially if we are worried about the future impact of the climate crisis. According to Andreessen, promoting unlimited technological progress is the only option: “there is no inherent conflict between the techno-capital machine and the natural environment”. If we generate unlimited clean energy, we can improve the natural environment, whereas a “technologically stagnant society ruins it” (Andreessen, 2023). This is possible, he writes, because technologies enable processes of dematerialization and will eventually lead to material abundance. And, “We believe the market economy is a discovery machine, a form of intelligence—an exploratory, evolutionary, adaptive system” (Andreessen, 2023). The manifesto thus conceptualizes technology as immaterial and the capitalist economy as an evolutionary system: it presents techno-fixes as a harmless form of environmental action, and economic growth as an inevitable process that political powers should not interfere with.
The “Techno-Optimist Manifesto” is an example of a form of techno-optimism that places full trust in the potential of capitalist tech companies to help humanity “innovate” its way out of a climate crisis. Andreessen (2023) cites historical figures including Buckminster Fuller, Stewart Brand, Douglas Engelbart and Kevin Kelly as the inspiration for his manifesto, showing that the work of these figures and their communities is being remixed and reappropriated into the future visions of contemporary techno-optimists. In this article, I analyse how the belief in the environmental potential of techno-fixes is engrained in the ideology and history of “Silicon Valley” and is discursively constructed through a language of “whole-systems thinking”. I use the concept of whole-systems thinking as a lens to study how simplified notions taken from whole-systems theory and cybernetics played and still play a key role in techno-environmental discourse in the post-war era in the United States. I zoom in on three key events that help explain the origins and evolution of popular whole-systems thinking: the Whole Earth Catalog community led by Stewart Brand in the 1960s, the Limits to Growth report by the Club of Rome in the 1970s and the cyberlibertarian community in the 1990s. I will show how a new language emerged that used simplified notions of systems-thinking to promote the idea that technology would help understand, manage and save a planet in peril.
Through a discourse analysis of primary sources and literature review I present a critical reading of these events in the light of today’s techno-optimistic environmental discourse. My corpus exists of a number of primary sources, including the aforementioned “Techno-Optimist Manifesto” (2023), Limits to Growth report (Meadows et al., 1972), editions of the Whole Earth Catalog and CoEvolution Quarterly, Barlow’s Declaration of the Independence of Cyberspace (1996), texts by Kevin Kelly (1998) and Stewart Brand (2009) and An Ecomodernist Manifesto (Asafu-Adjaye et al., 2015). I have discursively analysed these sources for their discussion of systems thinking as well as environmental concerns. By analysing how whole-systems thinking became a popular way of addressing environmental issues, I aim to provide a “post-war genealogy” (Pedwell, 2022) of the term and critique today’s promises about how tech can save the climate. As Johnston (2020) has argued, tracing the development of a cultural perception of trust in techno-fixes reveals a complex and multi-sided history. I claim that the environmental dimension of techno-optimistic discourses requires a critical reconsideration of the ideological underpinnings of Silicon Valley, described as the “Californian Ideology” by Barbrook and Cameron (1996). I will demonstrate how ecomodernism, including its belief that human progress can be “decoupled” from environmental decline, allows us to better understand, and critique, the environmental ideology of Silicon Valley.
I will first expand on contemporary ecomodernism and present my thesis that “decoupling” nature from culture has come to underlie whole-systems thinking in contemporary techno-optimistic discourse. In the following three sections, I highlight a few historical moments to demonstrate the development of the cultural perception of techno-fixes, specifically as a means of managing the environment. I show how whole-systems thinking became popularized by the Whole Earth community, got incorporated in environmental debates through the Limits to Growth report and is reflected in cyberutopian dreams about immaterial societies. Building on my necessarily brief history, I argue that techno-fixes can be strategically presented as ideal solutions if the world and environment are imagined as simple systems and technology as immaterial and harmless. Finally, I return to contemporary US tech culture and argue that it is shaped by, and co-shapes, the ideology of ecomodernism in which nature and culture are decoupled. I conclude that this worldview expresses itself today in corporate visions, resulting in a false hope about how to innovate our way out of the climate crisis…
Eminently worth reading in full (if in the end, as for me, less as a wholesale rejection of techno-optimism and systems thinking than as a cautionary counterweight): “Fixing the earth: whole-systems thinking in Silicon Valley’s environmental ideology,” from @WeAreTandF.
###
As we tangle with tech, we might pause to remember a man who bridged our understanding of the systems of the world from one paradigm to another: Sir Arthur Stanley Eddington, OM, FRS; he died on this date in 1944. An astrophysicist, mathematician, and philosopher of science known for his work on the motion, distribution, evolution and structure of stars, Eddington is probably best remembered for his relationship to Einstein: he was, via a series of widely-published articles, the primary “explainer” of Einstein’s Theory of General Relativity to the English-speaking world; and he was, in 1919, the leader of the experimental team that used observations of a solar eclipse to confirm the theory.

“Those who are not shocked when they first come across quantum theory cannot possibly have understood it”*…
A scheduling note: your correspondent is headed onto the road for a couple of weeks, so (Roughly) Daily will be a lot more roughly than daily until September 20th or so.
100 years ago, a circle of physicists shook the foundation of science. As Philip Ball explains, it’s still trembling…
In 1926, tensions were running high at the Institute for Theoretical Physics in Copenhagen. The institute was established 10 years earlier by the Danish physicist Niels Bohr, who had shaped it into a hothouse for young collaborators to thrash out a new theory of atoms. In 1925, one of Bohr’s protégés, the brilliant and ambitious German physicist Werner Heisenberg, had produced such a theory. But now everyone was arguing with each other about what it implied for the nature of physical reality itself.
To the Copenhagen group, it appeared reality had come undone…
[Ball tells the story of Niels Bohr’s building on Max Planck, of Werner Heisenberg’s wrangling of Bohr’s thought into theory, of Einstein’s objections and Erwin Schrödinger’s competing theory; then he homes in on the ontological issue at stake…]
Quantum mechanics, they said, demanded we throw away the old reality and replace it with something fuzzier, indistinct, and disturbingly subjective. No longer could scientists suppose that they were objectively probing a pre-existing world. Instead, it seemed that the experimenter’s choices determined what was seen—what, in fact, could be considered real at all.
In other words, the world is not simply sitting there, waiting for us to discover all the facts about it. Heisenberg’s uncertainty principle implied that those facts are determined only once we measure them. If we choose to measure an electron’s speed (more strictly, its momentum) precisely, then this becomes a fact about the world—but at the expense of accepting that there are simply no facts about its position. Or vice versa…
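In standard notation (the excerpt keeps the point verbal), the trade-off Ball describes is the Heisenberg relation

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum, and $\hbar$ is the reduced Planck constant: sharpening one fact necessarily blurs the other.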
…A century later, scientists are still arguing about this issue of what quantum mechanics means for the nature of reality…
[Ball recounts subsequent attempts to reconcile quantum theory to “reality,” including Schrödinger’s wave mechanics…]
… Schrödinger’s wave mechanics didn’t restore the kind of reality he and Einstein wanted. His theory represented all that could be said about a quantum object in the form of a mathematical expression called the wave function, from which one can predict the outcomes of making measurements on the object. The wave function looks much like a regular wave, like sound waves in air or water waves on the sea. But a wave of what?
At first, Schrödinger supposed that the amplitude of the wave—think of it like the height of a water wave—at a given point in space was a measure of the density of the smeared-out quantum particle there. But Born argued that in fact this amplitude (more precisely, the square of the amplitude) is a measure of the probability that we will find the particle there, if we make a measurement of its position.
This so-called Born rule goes to the heart of what makes quantum mechanics so odd. Classical Newtonian mechanics allows us to calculate the trajectory of an object like a baseball or the moon, so that we can say where it will be at some given time. But Schrödinger’s quantum mechanics doesn’t give us anything equivalent to a trajectory for a quantum particle. Rather, it tells us the chance of getting a particular measurement outcome. It seems to point in the opposite direction of other scientific theories: not toward the entity it describes, but toward our observation of it. What if we don’t make a measurement of the particle at all? Does the wave function still tell us the probability of its being at a given point at a given time? No, it says nothing about that—or more properly, it permits us to say nothing about it. It speaks only to the probabilities of measurement outcomes.
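The Born rule is simple to state concretely. A toy sketch (illustrative amplitudes of my own choosing, not from any specific physical system): discretize a wave function over a few positions, normalize it, and read off measurement probabilities as squared amplitudes.

```python
import math

# A toy discretized wave function: complex amplitudes at five positions.
psi = [0.1 + 0.2j, 0.3 - 0.1j, 0.5 + 0.0j, 0.3 + 0.1j, 0.1 - 0.2j]

# Normalize so the total probability is 1.
norm = math.sqrt(sum(abs(a) ** 2 for a in psi))
psi = [a / norm for a in psi]

# Born rule: the probability of finding the particle at site k,
# *if we measure its position*, is |psi_k|^2.
probs = [abs(a) ** 2 for a in psi]

print([round(p, 4) for p in probs])
print("most likely site:", max(range(len(probs)), key=probs.__getitem__))
```

Note what the list of probabilities does and doesn't say: it predicts the statistics of position measurements, but it assigns no trajectory, and no position at all, to the unmeasured particle.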
Crucially, this means that what we see depends on what and how we measure. There are situations for which quantum mechanics predicts that we will see one outcome if we measure one way, and a different outcome if we measure the same system in a different way. And this is not, as is sometimes implied (this was the cause of Heisenberg’s row with Bohr), because making a measurement disturbs the object in some physical manner, much as we might very slightly disturb the temperature of a solution in a test-tube by sticking a thermometer into it. Rather, it seems to be a fundamental property of nature that the very fact of acquiring information about it induces a change.
If, then, by reality we mean what we can observe of the world (for how can we meaningfully call something real if it can’t be seen, detected, or even inferred in any way?), it is hard to avoid the conclusion that we play an active role in determining what is real—a situation the American physicist John Archibald Wheeler called the “participatory universe.”..
… Heisenberg’s “uncertainty” captured that sense of the ground shifting. It was not the ideal word—Heisenberg himself originally used the German Ungenauigkeit, meaning something closer to “inexactness,” as well as Unbestimmtheit, which might be translated as “undeterminedness.” It was not that one was uncertain about the situation of a quantum object, but that there was nothing to be certain about.
There was an even more disconcerting implication behind the uncertainty principle. The vagueness of quantum phenomena, when an electron in an atom might seem to jump from one energy state to another at a time of its own choosing, seemed to indicate the demise of causality itself. Things happened in the quantum world, but one could not necessarily adduce a reason why. In his 1927 paper on the uncertainty principle, Heisenberg challenged the idea that causes in nature lead to predictable effects. That seemed to undermine the very foundation of science, and it made the world seem like a lawless, somewhat arbitrary place….
… One of Bohr’s most provocative views was that there is a fundamental distinction between the fuzzy, probabilistic quantum world and the classical world of real objects in real places, where measurements of, say, an electron with a macroscopic instrument tell us that it is here and not there.
What Bohr meant is shocking. Reality, he implied, doesn’t consist of objects located in time and space. It consists of “quantum events,” which are obliged to be self-consistent (in the sense that quantum mechanics can describe them accurately) but not classically consistent with one another. One implication of this, as far as we can currently tell, is that two observers can see different and conflicting outcomes from an event—yet both can be right.
But this rigid distinction between the quantum and classical worlds can’t be sustained today. Scientists can now conduct experiments that probe size scales in between those where quantum and classical rules are thought to apply—neither microscopic (the atomic scale) nor macroscopic (the human scale), but mesoscopic (an intermediate size). We can look, for example, at the behavior of nanoparticles that can be seen and manipulated yet are small enough to be governed by quantum rules. Such experiments confirm the view that there is no abrupt boundary of quantum and classical. Quantum effects can still be observed at these intermediate scales if our devices are sensitive enough, but those effects can be harder to discern as the number of particles in the system increases.
To understand such experiments, it’s not necessary to adopt any particular interpretation of quantum mechanics, but merely to apply the standard theory—encompassed within Schrödinger’s wave mechanics, say—more expansively than Bohr and colleagues did, using it to explore what happens to a quantum object as it interacts with its surrounding environment. In this way, physicists are starting to understand how information gets out of a quantum system and into its environment, and how, as it does so, the fuzziness of quantum probabilities morphs into the sharpness of classical measurement. Thanks to such work, it is beginning to seem that our familiar world is just what quantum mechanics looks like when you are 6 feet tall.
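The leakage of quantum information into the environment can be caricatured in a few lines. In this toy pure-dephasing model (mine, not from Ball's article), the off-diagonal "coherence" term of a qubit's density matrix decays exponentially while the classical probabilities on the diagonal persist — a cartoon of how quantum fuzziness fades into classical definiteness.

```python
import math

# A qubit density matrix [[p00, c], [conj(c), p11]] stored as (p00, p11, c).
def dephase(p00, p11, coherence, gamma, t):
    """Pure dephasing: off-diagonal coherence decays as exp(-gamma * t);
    the diagonal populations (classical probabilities) are untouched."""
    return p00, p11, coherence * math.exp(-gamma * t)

# Equal superposition (|0> + |1>)/sqrt(2): populations 1/2, full coherence 1/2.
state = (0.5, 0.5, 0.5)
for t in (0.0, 1.0, 10.0):
    p00, p11, c = dephase(*state, gamma=1.0, t=t)
    print(f"t={t}: populations ({p00}, {p11}), coherence {c:.6f}")
```

As the coherence shrinks toward zero, the state becomes indistinguishable from a classical coin flip — which is the sense in which, in the passage above, "the fuzziness of quantum probabilities morphs into the sharpness of classical measurement."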
But even if we manage to complete that project of uniting the quantum with the classical, we might end up none the wiser about what manner of stuff—what kind of reality—it all arises from. Perhaps one day another deeper theory will tell us. Or maybe the Copenhagen group was right a hundred years ago that we just have to accept a contingent, provisional reality: a world only half-formed until we decide how it will be…
Eminently worth reading in full: “When Reality Came Undone,” from @philipcball in @NautilusMag.
See also: When We Cease to Understand the World, by Benjamin Labatut.
* Niels Bohr
###
As we wrestle with reality, we might spare a thought for Ludwig Boltzmann; he died on this date in 1906. A physicist and philosopher, he is best remembered for the development of statistical mechanics, and the statistical explanation of the second law of thermodynamics (which connected entropy and probability).
Boltzmann helped pave the way for quantum theory both with his development of statistical mechanics (which is a pillar of modern physics) and with his 1877 suggestion that the energy levels of a physical system could be discrete.
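That statistical reading of the second law is summed up by the formula engraved on Boltzmann's tombstone:

```latex
S = k_B \ln W
```

where $W$ counts the microstates consistent with a given macrostate and $k_B$ is Boltzmann's constant — entropy as a measure of probability.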
“Few people have the imagination for reality”*…
Experiments that test physics and philosophy as “a single whole,” Amanda Gefter suggests, may be our only route to surefire knowledge about the universe…
Metaphysics is the branch of philosophy that deals in the deep scaffolding of the world: the nature of space, time, causation and existence, the foundations of reality itself. It’s generally considered untestable, since metaphysical assumptions underlie all our efforts to conduct tests and interpret results. Those assumptions usually go unspoken.
Most of the time, that’s fine. Intuitions we have about the way the world works rarely conflict with our everyday experience. At speeds far slower than the speed of light or at scales far larger than the quantum one, we can, for instance, assume that objects have definite features independent of our measurements, that we all share a universal space and time, that a fact for one of us is a fact for all. As long as our philosophy works, it lurks undetected in the background, leading us to mistakenly believe that science is something separable from metaphysics.
But at the uncharted edges of experience — at high speeds and tiny scales — those intuitions cease to serve us, making it impossible for us to do science without confronting our philosophical assumptions head-on. Suddenly we find ourselves in a place where science and philosophy can no longer be neatly distinguished. A place, according to the physicist Eric Cavalcanti, called “experimental metaphysics.”
Cavalcanti is carrying the torch of a tradition that stretches back through a long line of rebellious thinkers who have resisted the usual dividing lines between physics and philosophy. In experimental metaphysics, the tools of science can be used to test our philosophical worldviews, which in turn can be used to better understand science. Cavalcanti, a 46-year-old native of Brazil who is a professor at Griffith University in Brisbane, Australia, and his colleagues have published the strongest result attained in experimental metaphysics yet, a theorem that places strict and surprising constraints on the nature of reality. They’re now designing clever, if controversial, experiments to test our assumptions not only about physics, but about the mind.
While we might expect the injection of philosophy into science to result in something less scientific, in fact, says Cavalcanti, the opposite is true. “In some sense, the knowledge that we obtain through experimental metaphysics is more secure and more scientific,” he said, because it vets not only our scientific hypotheses but the premises that usually lie hidden beneath…
Gefter traces the history of this integrative train of thought (Kant, Duhem, Poincaré, Popper, Einstein, Bell), its potential for helping understand quantum theory… and the prospect of harnessing AI to run the necessary experiments– seemingly complex and resource-intensive beyond the scope of current experimental techniques…
Cavalcanti… is holding out hope. We may never be able to run the experiment on a human, he says, but why not an artificial intelligence algorithm? In his newest work, along with the physicist Howard Wiseman and the mathematician Eleanor Rieffel, he argues that the friend could be an AI algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab. “At some point,” Cavalcanti contends, “we’ll have artificial intelligence that will be essentially indistinguishable from humans as far as cognitive abilities are concerned,” and we’ll be able to test his inequality once and for all.
But that’s not an uncontroversial assumption. Some philosophers of mind believe in the possibility of strong AI, but certainly not all. Thinkers in what’s known as embodied cognition, for instance, argue against the notion of a disembodied mind, while the enactive approach to cognition grants minds only to living creatures.
All of which leaves physics in an awkward position. We can’t know whether nature violates Cavalcanti’s [theorem] — we can’t know, that is, whether objectivity itself is on the metaphysical chopping block — until we can define what counts as an observer, and figuring that out involves physics, cognitive science and philosophy. The radical space of experimental metaphysics expands to entwine all three of them. To paraphrase Gonseth, perhaps they form a single whole…
“‘Metaphysical Experiments’ Probe Our Hidden Assumptions About Reality,” in @QuantaMagazine.
* Johann Wolfgang von Goethe
###
As we examine edges, we might send thoughtful birthday greetings to Rudolf Schottlaender; he was born on this date in 1900. A philosopher who studied with Edmund Husserl, Martin Heidegger, Nicolai Hartmann, and Karl Jaspers, Schottlaender survived the Nazi regime and the persecution of the Jews, hiding in Berlin. After the war, as his democratic and humanist proclivities kept him from posts in philosophy faculties, he distinguished himself as a classical philologist and translator (e.g., new translations of Sophocles which were very effective on the stage, and an edition of Petrarch).
But he continued to publish philosophical and political essays and articles, which he predominantly published in the West and in which he saw himself as a mediator between the systems. Because of his positions critical to East Germany, he was put under close surveillance by the Ministry for State Security (Ministerium für Staatssicherheit or Stasi)– and inspired leading minds of the developing opposition in East Germany.







