“The mind is not a vessel to be filled, but a fire to be kindled”*…
(Roughly) Daily is, in effect, a kind of notebook, a commonplace book. So it will be no surprise that your correspondent found today’s featured piece fascinating.
Jillian Hess, a professor who studies the history of note-taking, shares the lessons she took from her review of the papers of the remarkable Richard Feynman…
Formal education, at its best, prepares us for a life of learning. After all, we are only in school for a fraction of our lives and there is so much to learn!
Richard Feynman (1918-1988) understood the value of self-education. He was a Nobel Prize-winning theoretical physicist, a member of the Manhattan Project at the age of 25, and a dynamic public intellectual who never stopped learning.
Often touted as one of history’s greatest learners, Feynman taught himself a dizzying amount of science. I wanted to see his notes for myself—to observe the great autodidact thinking on the page. So, I visited his archives at Caltech in February…
… In the archives, I saw… for myself: Feynman’s notebooks contain imprints of thinking in real-time—the work as it happened. They were instruments for thinking through uncertainty.
What follows is a list of note-taking principles for self-education that I gathered while studying Feynman’s notebooks.
Start with First Principles: Feynman’s “Things I Don’t Know About” Notebook
Discussions about Feynman’s learning process usually draw from this notebook, which he compiled as a Ph.D. student at Princeton. The contents include mechanics, mathematical methods, and thermodynamics. Clearly, he knew something about these topics, but he found his understanding superficial. So, his response was to take the subject apart—to break it down into “the essential kernels” …
[Hess illustrates this principle, then unpacks two others: “create a reading index” and “keep learning.” She continues…]
… Uncertainty is Interesting
This is my biggest takeaway: We should fear certainty more than doubt. Learning to live with uncertainty is an essential aspect of learning, as Feynman said in 1981:
You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong.
And then, in an echo of his “Notebook of Things I Know Nothing About,” compiled four decades prior, he adds:
…I’m not absolutely sure of anything, and there are many things I don’t know anything about.
If a man as celebrated for his genius as Feynman felt that way, certainly the rest of us have a lot more to learn…
[And she concludes…]
… Notes on Feynman’s Notes:
Use notes to think: Feynman didn’t think through problems in his head and then turn to his notebooks. Instead, he used his notebooks to think through problems. His thought process required paper.
Start with first principles: “Why” is a very powerful question. And asking why can lead us back to the fundamentals and help us understand them in an entirely new light. This applies to any subject. Feynman has helped me think of note-taking as a kind of expedition. Use your notes to dig deeper into topics you think you already understand.
Never stop learning: How wonderful would it be if we could hold onto the excitement of learning we had as children? After all, the world didn’t get less interesting. It’s worth returning to the note-taking methods you used in school to see if they are still useful in adulthood. I particularly like Feynman’s high school method of taking 30 minutes to understand a subject before he allowed himself to take notes on it.
[Then leaves us with the man himself, “in all his radiant, enthusiastic, brilliance”…]
On “Richard Feynman’s Notes For Self-Education.”
Pair with: “Curiosity Is No Solo Act”: “it gains its real power when embedded in webs of relationship and shared meaning-making”… something that Feynman’s life also demonstrated (as you can see in his autobiography and/or in James Gleick’s biography, Genius)
* Plutarch
###
As we light that fire, we might spare a thought for Jeremy Bernstein; he died on this date last year. A physicist who worked on nuclear propulsion for Project Orion and held research and teaching positions at Stevens Institute of Technology, the Institute for Advanced Study, Brookhaven National Laboratory, CERN, Oxford University, University of Islamabad, and École Polytechnique, he is better remembered as a gifted popular science writer and profiler of scientists.
Bernstein wrote 30 books, and scores of magazine articles for “general readers”– for The New Yorker, where he was a staff writer from 1961 to 1995, and for The Atlantic Monthly, the New York Review of Books, and Scientific American, among others.
Of Feynman, Bernstein wrote “[his] Mozartean genius in physics seemed to be combined with an almost equally Mozartean urge to play the clown.” (in which, of course, Feynman was in the good company of Einstein, Claude Shannon, and others :-)
“For what man in the natural state or course of thinking did ever conceive it in his power to reduce the notions of all mankind exactly to the same length, and breadth, and height of his own? Yet this is the first humble and civil design of all innovators in the empire of reason.”*…
A “theory of everything” (a Grand Unified Theory on steroids)– a (still hypothetical) coherent theoretical framework of physics containing and explaining all physical principles– is the holy grail of physicists. Natalie Wolchover checks in on the most recent front-runner in the hunt…
Fifty-eight years after it first appeared, string theory remains the most popular candidate for the “theory of everything,” the unified mathematical framework for all matter and forces in the universe. This is much to the chagrin of its rather vocal critics. “String theory is not dead; it’s undead and now walks around like a zombie eating people’s brains,” the former physicist Sabine Hossenfelder said on her popular YouTube channel in 2024.
String theory is a “failure,” the mathematical physicist and blogger Peter Woit often says. His complaint is not that string theory is wrong — it’s that it’s “not even wrong,” as he titled a 2006 book. The theory says that, on scales of billionths of trillionths of trillionths of a centimeter, extra curled-up spatial dimensions reveal themselves and particles resolve into extended objects — strands and loops of energy — rather than points. But this alleged substructure is too small to detect, probably ever. The prediction is untestable.
A further problem is that uncountably many different configurations of dimensions and strings are permitted at those tiny scales; the theory can give rise to a limitless variety of universes. Amid this vast landscape of solutions, no one can hope to find a precise microscopic configuration that undergirds our particular macroscopic world.
These issues are profound indeed. Yet in my experience, the typical high-energy theorist in a prestigious university physics department still thinks string theory has a good chance of being correct, at least in part. The field has become siloed between those who deem it worth studying and those who don’t.
Recently, a new angle of attack has opened up. An approach called bootstrapping has allowed physicists to calculate that, under various starting assumptions about the universe, a key equation from string theory naturally follows. For some experts, these findings support the notion of “string uniqueness,” the idea that it is the only mathematically consistent quantum description of gravity and everything else.
Responding to one bootstrap paper on her YouTube channel, mere weeks after the “undead” comment, Hossenfelder said it was “string theorists do[ing] something sensible for once.” She added, “I’d say this paper strengthens the argument for string theory.”
Not everyone agrees, but the findings are reviving an important question. “This question of ‘Does string theory describe the world?’ has just been so taboo,” said Cliff Cheung, a physicist at the California Institute of Technology and an author of the paper discussed by Hossenfelder. Now, “people are actually thinking about it for the first time in decades.”
Getting wind of this work, I wanted to drill down on the logic and examine how the string hypothesis is faring these days…
And so she does: “Are Strings Still Our Best Hope for a Theory of Everything?” from @nattyover.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.
Compare/contrast with: “Where Some See Strings, She Sees a Space-Time Made of Fractals.”
* Jonathan Swift, A Tale of a Tub
###
As we grapple with Gödel, we might spare a thought for Hermann Rorschach; he died on this date in 1922. A psychiatrist and psychoanalyst, his education in art helped to spur the development of a set of inkblots that were used experimentally to measure various unconscious parts of the subject’s personality. Rorschach knew the human tendency to project interpretations and feelings onto ambiguous stimuli and believed that the subjective responses of his subjects enabled him to distinguish among them on the basis of their perceptive abilities, intelligence, and emotional characteristics. His method has come to be known as the Rorschach test, iterations of which have continued to be used over the years to help identify personality, psychotic, and neurological disorders.
Perhaps his insight that we humans tend “to project interpretations and feelings onto ambiguous stimuli” can inform our understanding of physicists trying to construct mental/conceptual models of our reality, which they’ve been doing for a very long time, and of the limitations of that quest.
“What is really amazing, and frustrating, is mankind’s habit of refusing to see the obvious and inevitable until it is there, and then muttering about unforeseen catastrophes”*…

One of the effectively-secret ingredients in the world’s economic growth over the last couple of centuries has been insurance. The ability to insure against catastrophic loss has underwritten (pun intended) the trillions and trillions of dollars of loans that have funded the construction and acquisition that has enabled the growth of both commercial endeavor and the accumulation of personal wealth (directly through home ownership and indirectly through equity ownership in those commercial endeavors or participation in pension schemes that own that equity).
But in a way that was entirely predictable, climate change is rendering a growing portion of the world uninsurable. Gavin Evans ponders what that might mean…
The Florida peninsula looks like a sore thumb. It juts into the Gulf of Mexico and the Atlantic, where the water is getting warmer year on year, prompting fiercer hurricanes that can blow down houses like collapsing decks of cards. Climate scientists are convinced all hell will break loose sooner or later when a monster-sized, property-destroying storm makes a direct hit on Miami or Tampa-St Petersburg. Given three near-misses in the recent past, the experts view such a calamity as inevitable. It’s a huge risk for anyone living there – they stand to lose everything – but also for those bearing the financial side of this risk, the insurance companies. Some in the industry are seeing this as a portent for their future – an impending existential threat with profound implications for the economic system.
There are no easy solutions for people still paying off mortgages and those who want to buy property along the Florida coast, because the potential payout on the back of a mammoth storm is so high that the reinsurers (who insure the insurers against catastrophe) are refusing to underwrite their clients and, with no reinsurance, there’s no insurance; and with no insurance, no mortgages; and with no mortgages, no property market. Insurance protects investments against loss and is therefore a pillar of the economic system. If it goes, economies are destabilised.
Many panicked homeowners have rushed to make their houses less risky for insurance companies by reinforcing their roofs with hurricane clips, installing impact-resistant windows, doors and shutters, and strengthening their foundations. But it’s not just storms and higher, warmer seas that concern insurers. Rising temperatures mean that the frequency, range and ferocity of wildfires are also on the rise.
So far this year, 3,374 wildfires have burned an area of Florida totalling 231,172 acres (at the time of writing), and it is even worse in California where 7,855 blazes have killed at least 31 people, destroyed more than 17,000 houses and devoured 525,208 acres of land, at an estimated cost of more than $250 billion. Here, too, homeowners rushed to make their properties more palatable to cold-footed insurers – clearing their surroundings of anything flammable, covering yards with gravel, sheathing houses with fire-resistant stucco, and replacing wooden roofs with steel.
But, even for the most diligent, insurance companies have turned tail, dumping existing clients and abandoning fire-prone and storm-prone areas altogether. On the Californian fire front, 2024 was a turning point as several insurers ceased issuing new policies because of fire-associated risks, including the United States’ biggest property insurer, State Farm, which cancelled policies in parts of Los Angeles. It is all too easy to view this cynically, but it’s happening because property insurers have been reporting year-on-year losses from climate change-related payouts.
Insurance companies survive by making more money from covering risk than they lose from these risks, which is why they prefer clients less likely to claim (insofar as they can predict the risk involved) and require them to pay substantial excess to discourage claims. When payouts rise above the premium intake, insurance companies either hike up these premiums or withdraw. But when that risk is considered catastrophic, potentially affecting many thousands of clients, as with Floridian storms and Californian fires, it is the reinsurers who are the first to retreat because they will ultimately bear most of the cost.
Reinsurers aggregate payout patterns to establish the likelihood of having to make huge payouts from future natural catastrophes. They do this by gathering exposure data from existing insurers in a geographical area, and by examining catastrophe models (computer simulations that estimate potential losses from natural perils). When they put all this together with detailed analysis of conditions within the area, they come up with a figure for their total potential loss if a catastrophic event strikes.
This is why reinsurers focus so intensely on climate change. Take a glance at the websites of big ones like Swiss Re and Munich Re and you get a sense of how central this is to their calculations – a concern that has spread to property insurers who are starting to hire climate consultants. Even more than market volatility, climate is their biggest headache. ‘You won’t meet a single insurance or reinsurance CEO who doesn’t believe in climate change,’ the insurance investor and former Lombard Insurance CEO James Orford told me. ‘They see it in the numbers – a combination of more extreme, less predictable events, combined with big losses of sums insured. All the modelling suggests these are uninsurable risks.’…
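The pooling arithmetic at work here can be sketched in a few lines. In this toy model (every figure is hypothetical, chosen only for illustration), premiums comfortably cover independent, uncorrelated claims, but a single catastrophe that damages a quarter of the portfolio at once dwarfs the premium pool:

```python
# Toy model of insurance pooling; all figures are hypothetical.

def expected_annual_payout(n_policies, p_loss, avg_claim):
    """Expected payout when each policy's loss is independent of the others."""
    return n_policies * p_loss * avg_claim

def catastrophe_payout(n_policies, share_affected, avg_claim):
    """Payout when a single event damages a large share of policies at once."""
    return n_policies * share_affected * avg_claim

n = 100_000           # policies written
premium = 4_000       # annual premium per policy
avg_claim = 300_000   # average claim size

intake = n * premium                                   # $400M in premiums
routine = expected_annual_payout(n, 0.01, avg_claim)   # 1% independent losses: $300M
disaster = catastrophe_payout(n, 0.25, avg_claim)      # storm hits 25% at once: $7.5B

print(intake > routine)    # True: pooling works for uncorrelated risk
print(intake > disaster)   # False: a correlated catastrophe swamps the pool
```

That correlation, many clients claiming at once, is exactly why the reinsurers, who bear the tail of such losses, are the first to retreat.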
[Evans recaps the history of insurance, starting in Genoa, in the mid-14th century, with the insuring of maritime expeditions; examines the current state of play; examines the efforts (and gauges the weaknesses) of states’ attempts to step up with coverage when insurers step away; then considers another role for states…]
If states do withdraw from insurance and reinsurance, some of the most lucrative areas of the US, Canada, Europe, Asia, Africa and Australia will be devastated: no mortgages and no banks, leading to more ghost towns and villages. ‘It ends with depopulation and abandonment,’ said Agarwala. ‘Climate change reduces the operating space for humanity.’ In the UK, rising sea levels and coastal erosion could literally reduce operating space, putting 200,000 British homes at risk by 2050. There’s no coastal-erosion insurance, which puts more burden on the state, mainly to pay for new defences, but also to help people move.
Governments can take action in other ways, by investing greater sums in risk-prevention and management. There are signs of this happening such as the ‘fire-hardening’ and storm-prevention efforts in Florida, and improved flood defences in the UK; meanwhile, the EU’s Recovery and Resilience Facility is being used in several countries to build and renovate operations centres to cope with wildfires, and to buy firefighting helicopters.
In future, it is likely that voters will demand that their state and national governments do far more, regardless of the cost. They will want tougher building codes, including limitations on building in risky areas; expensive fire-prevention and fire-fighting schemes; better flood and storm defences; improved early catastrophe management, involving relocating people from risky areas and, when disaster strikes, rapid life-saving interventions such as large-scale emergency evacuations. If the insurance industry is forced to retreat by the climate crisis, all of this infrastructural investment will require vast chunks of taxpayers’ money. It is hard to avoid the feeling that this is part of our destiny, and that the sore thumb of the Florida peninsula is pointing us to the future…
Whole regions of the world are now uninsurable, bringing radical uncertainty to the economy: “The insurance catastrophe,” from @aeon.co.
See also: “An Uninsurable Country” (a report from NRDC), “The Insurance Crisis Is So Desperate People Are Turning Socialist” (a gift article from Bloomberg), and “The Uninsurable Future: The Climate Threat to Property Insurance, and How to Stop It” (from Yale Law Review)
* Isaac Asimov
###
As we cover up, we might send highly-charged birthday greetings to a man who made foundational contributions both to the detection of climatic conditions and to a technology that may help alleviate climate change: John Frederic Daniell was born on this date in 1790. Named the first professor of chemistry at the newly founded King’s College London in 1831, he was an avid meteorologist. He invented the dew-point hygrometer known by his name and a register pyrometer; in 1830 he erected a water-barometer in the hall of the Royal Society.
But Daniell is better remembered as a chemist (and physicist), especially for his invention of the Daniell cell, a battery element far more dependable than the voltaic cells that were the standard before him. Indeed, the Daniell cell is the historical basis for the contemporary definition of the volt (the unit of electromotive force in the International System of Units). All advances in battery technology since have built on the base that Daniell laid.
“I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space.”*…
Physicists believe a third class of particles – anyons – could exist, but only in 2D. As Elay Shech asks, what kind of existence is that?…
Everything around you – from tables and trees to distant stars and the great diversity of animal and plant life – is built from a small set of elementary particles. According to established scientific theories, these particles fall into two basic and deeply distinct categories: bosons and fermions.
Bosons are sociable. They happily pile into the same quantum state, that is, the same combination of quantum properties such as energy level, like photons do when they form a laser. Fermions, by contrast, are the introverts of the particle world. They flat out refuse to share a quantum state with one another. This reclusive behaviour is what forces electrons to arrange themselves in layered atomic shells, ultimately giving rise to the structure of the periodic table and the rich chemistry it enables.
At least, that’s what we assumed. In recent years, evidence has been accumulating for a third class of particles called ‘anyons’. Their name, coined by the Nobel laureate Frank Wilczek, gestures playfully at their refusal to fit into the standard binary of bosons and fermions – for anyons, anything goes. If confirmed, anyons wouldn’t just add a new member to the particle zoo. They would constitute an entirely novel category – a new genus – that rewrites the rules for how particles move, interact, and combine. And those strange rules might one day engender new technologies.
Although none of the elementary particles that physicists have detected are anyons, it is possible to engineer environments that give rise to them and potentially harness their power. We now think that some anyons wind around one another, weaving paths that store information in a way that’s unusually hard to disturb. That makes them promising candidates for building quantum computers – machines that could revolutionise fields like drug discovery, materials science, and cryptography. Unlike today’s quantum systems that are easily disturbed, anyon-based designs may offer built-in protection and show real promise as building blocks for tomorrow’s computers.
Philosophically, however, there’s a wrinkle in the story. The theoretical foundations make it clear that anyons are possible only in two dimensions, yet we inhabit a three-dimensional world. That makes them seem, in a sense, like fictions. When scientists seek to explore the behaviours of complicated systems, they use what philosophers call ‘idealisations’, which can reveal underlying patterns by stripping away messy real-world details. But these idealisations may also mislead. If a scientific prediction depends entirely on simplification – if it vanishes the moment we take the idealisation away – that’s a warning sign that something has gone wrong in our analysis.
So, if anyons are possible only through two-dimensional idealisations, what kind of reality do they actually possess? Are they fundamental constituents of nature, emergent patterns, or something in between? Answering these questions means venturing into the quantum world, beyond the familiar classes of particles, climbing among the loops and holes of topology, detouring into the strange physics of two-dimensional flatland – and embracing the idea that apparently idealised fictions can reveal deeper truths…
[Shech explains anyons, and considers the various strategies for making sense of them. (Perhaps “paraparticles” like anyons don’t actually exist. Or we simply lack the theoretical framework and the experimental work needed to find them. Or, in the physics of ultra-thin materials, we’ve already found them.) Considering the latter two possibilities, he concludes…]
So, if anyons exist, what kind of existence is it? None of the elementary particles are anyons. Instead, physicists appeal to the notion of ‘quasiparticles’: large numbers of electrons or atoms interact in complex ways and behave, collectively, like a simpler object with novel behaviours of its own that you can track.
Picture fans doing ‘the wave’ in a stadium. The wave travels around the arena as if it’s a single thing, even though it’s really just people standing and sitting in sequence. In a solid, the coordinated motion of many particles can act the same way – forming a ripple or disturbance that moves as if it were its own particle. Sometimes, the disturbance centres on an individual particle, like an electron trying to move through a material. As it bumps into nearby atoms and other electrons, they push back, creating a kind of ‘cloud’ around it. The electron plus its cloud behave like a single, heavier, slower particle with new properties. That whole package is also treated as a quasiparticle.
Some quasiparticles behave like bosons or fermions. But for others, when two of them trade places, the system’s quantum state picks up a built-in marker that isn’t limited to the two familiar settings. It can take on intermediate values, which means novel quantum statistics. If the theories describing these systems are right, then the quasiparticles in question aren’t just behaving oddly, they are anyons: the third type of particles.
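The ‘built-in marker’ here is an exchange phase. In the standard account, swapping two identical particles multiplies the quantum state by a phase factor; in three dimensions only two values are possible, but in two dimensions any value is allowed, which is the usual way of stating the anyon idea:

```latex
% Exchanging identical particles 1 and 2 multiplies the state by a phase:
\psi(\mathbf{r}_2, \mathbf{r}_1) = e^{i\theta}\, \psi(\mathbf{r}_1, \mathbf{r}_2)
% Bosons:   \theta = 0     (factor +1, symmetric state)
% Fermions: \theta = \pi   (factor -1, antisymmetric state)
% Anyons (two dimensions only): \theta may take any intermediate value,
% hence Wilczek's name: for anyons, anything goes.
```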
In other words, while none of the elementary particles that physicists have detected are anyons – physicists have never ‘seen’ an anyon in isolation – we can engineer environments that give rise to emergent quasiparticles exhibiting the quantum statistics of anyons. In this sense, anyons have been experimentally confirmed. But there are different kinds of anyons, and there is still active work being done on the more exotic anyons that we hope to harness for quantum computers.
But even so, are quasiparticles, like anyons, really real? That depends. Some philosophers argue that existence depends on scale. Zoom in close enough, and it makes little sense to talk about tables or trees – those objects show up only at the human scale. In the same way, some particles exist only in certain settings. Anyons don’t appear in the most fundamental theories, but they show up in thin, flat systems where they are the stable patterns that help explain real, measurable effects. From this point of view, they’re as real as anything else we use to explain the world.
Others take a more radical stance. They argue that quasiparticles, fields and even elementary particles aren’t truly real: they’re just useful labels. What really exists is not stuff but structure: relations and patterns. So ‘anyons’ are one way we track the relevant structure when a system is effectively two-dimensional.
Questions about reality take us deep into philosophy, but they also open the door to a broader enquiry: what does the story of anyons reveal about the role of idealisations and fictions in science? Why bother playing in flatland at all?
Often, idealisations are seen as nothing more than shortcuts. They strip away details to make the mathematics manageable, or serve as teaching tools to highlight the essentials, but they aren’t thought to play a substantive role in science. On this view, they’re conveniences, not engines of discovery.
But the story of anyons shows that idealisations can do far more. They open up new possibilities, sharpen our understanding of theory, clarify what a phenomenon is supposed to be in the first place, and sometimes even point the way to new science and engineering.
The first payoff is possibility: idealisation lets us explore a theory’s ‘what ifs’, the range of behaviours it allows even if the world doesn’t exactly realise them. When we move to two dimensions, quantum mechanics suddenly permits a new kind of particle choreography. Not just a simple swap, but wind-and-weave novel rules for how particles can combine and interact. Thinking in this strictly two-dimensional setting is not a parlour trick. It’s a way to see what the theory itself makes possible.
That same detour through flatland also assists us in understanding the theory better. Idealised cases turn up the contrast knobs. In three dimensions, particle exchanges blur into just two familiar options of bosons and fermions. In two dimensions, the picture sharpens. By simplifying the world, the idealisation makes the theory’s structure visible to the naked eye.
Idealisation also helps us pin down what a phenomenon really is. It separates difference-makers from distractions. In the anyon case, the flat setting reveals what would count as a genuine signature, say, a lasting memory of the winding of particles, and what would be a mere lookalike that ordinary bosons or fermions could mimic. It also highlights contrasts with other theoretical possibilities: paraparticles, for example, don’t depend on a two-dimensional world, but anyons seem to. That contrast helps identify what belongs to the essence of anyons and what does not. When we return to real materials, we know what to look for and what to ignore.
Finally, idealisations don’t just help us read a theory – they help write the next one. If experiments keep turning up signatures that seem to exist only in flatland, then what began as an idealisation becomes a compass for discovery. A future theory must build that behaviour into its structure as a genuine, non-idealised possibility. Sometimes, that means showing how real materials effectively enforce the ideal constraint, such as true two-dimensionality. Other times, it means uncovering a new mechanism that reproduces the same exchange behaviour without the fragile assumptions of perfect flatness. In both cases, idealisation serves as a guide for theory-building. It tells us which features must survive, which can bend, and where to look for the next, more general theory.
So, when we venture into flatland to study anyons, we’re not just simplifying – we’re exploring the boundaries where mathematics, matter and reality meet. The journey from fiction to fact may be strange, but it’s also how science moves forward…
Eminently worth reading in full: “Playing in flatland,” from @elayshech.bsky.social in @aeon.co.
Pair with: “Is Particle Physics Dead, Dying, or Just Hard?”
* Edwin A. Abbott, Flatland: A Romance of Many Dimensions
###
As we brood over the boundaries of “being” (and knowing), we might spare a thought for Bertrand Russell; he died on this date in 1970. A philosopher, logician, mathematician, and public intellectual, he influenced mathematics, logic, and several areas of analytic philosophy.
He was one of the early 20th century’s prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell, with Moore, led the British “revolt against idealism“. Together with his former teacher Alfred North Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt [if ultimately unsuccessful, pace Gödel] to reduce the whole of mathematics to logic. Russell’s article “On Denoting” is considered a “paradigm of philosophy.”
“I love to talk about nothing. It’s the only thing I know anything about.”*…

Try as they might, scientists can’t truly rid a space or an object of its energy. But as George Musser reports, what “zero-point energy” really means is up for interpretation…
Suppose you want to empty a box. Really, truly empty it. You remove all its visible contents, pump out any gases, and — applying some science-fiction technology — evacuate any unseeable material such as dark matter. According to quantum mechanics, what’s left inside?
It sounds like a trick question. And in quantum mechanics, you know to expect a trick answer. Not only is the box still filled with energy, but all your efforts to empty it have barely put a dent in the amount.
This unavoidable residue is known as ground-state energy, or zero-point energy. It comes in two basic forms: The one in the box is associated with fields, such as the electromagnetic field, and the other is associated with discrete objects, such as atoms and molecules. You may dampen a field’s vibrations, but you cannot eliminate every trace of its presence. And atoms and molecules retain energy even if they’re cooled arbitrarily close to absolute zero. In both cases, the underlying physics is the same.
Zero-point energy is characteristic of any material structure or object that is at least partly confined, such as an atom held by electric fields in a molecule. The situation is like that of a ball that has settled at the bottom of a valley. The total energy of the ball consists of its potential energy (related to position) plus its kinetic energy (related to motion). To zero out both components, you would have to give a precise value to both the object’s position and its velocity, something forbidden by the Heisenberg uncertainty principle.
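For the textbook case of a quantum harmonic oscillator (the quantum analogue of the ball in the valley), this trade-off can be made quantitative; the standard back-of-the-envelope argument runs:

```latex
% Energy of a particle with position spread \Delta x and momentum spread
% \Delta p in a harmonic potential of frequency \omega:
E \approx \frac{(\Delta p)^2}{2m} + \frac{1}{2} m \omega^2 (\Delta x)^2
% The Heisenberg uncertainty principle forbids \Delta x\,\Delta p < \hbar/2.
% Substituting \Delta p = \hbar/(2\Delta x) and minimising over \Delta x
% (the minimum falls at (\Delta x)^2 = \hbar/(2 m \omega)) gives
E_0 = \frac{1}{2}\hbar\omega
% a nonzero floor: the zero-point energy.
```

Shrink the position spread and the momentum spread (hence kinetic energy) must grow, and vice versa; the energy can never reach zero.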
What the existence of zero-point energy tells you at a deeper level depends ultimately on which interpretation of quantum mechanics you adopt. The only noncontentious thing you can say is that, if you situate a bunch of particles in their lowest energy state and measure their positions or velocities, you will observe a spread of values. Despite being drained of energy, the particles will look as if they’ve been jiggling. In some interpretations of quantum mechanics, they really have been. But in others, the appearance of motion is a misleading holdover from classical physics, and there is no intuitive way to picture what’s happening…
More on the development of our understanding of “zero-point energy” and on the questions that remain: “In Quantum Mechanics, Nothingness Is the Potential To Be Anything,” from @georgemusser.com in @quantamagazine.bsky.social.
For the most amusing of musings on nothing, see Percival Everett‘s Dr. No.
* Oscar Wilde
###
As we noodle on nought, we might spare a thought for Kurt Gödel; he died on this date in 1978. A mathematician, logician, and philosopher, he is best known for his Incompleteness Theorems (published in 1931). He proved that in any consistent axiomatic mathematical system rich enough to express arithmetic there are propositions that cannot be proved or disproved within the axioms of the system. In particular, the consistency of the axioms cannot be proved from within… thus ending a century of attempts to put the whole of mathematics on a complete axiomatic basis. [See here for a consideration of what his finding might mean for moral philosophy…]







