Posts Tagged ‘logic’
“I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space.”*…
Physicists believe a third class of particles – anyons – could exist, but only in 2D. As Elay Shech asks, what kind of existence is that?…
Everything around you – from tables and trees to distant stars and the great diversity of animal and plant life – is built from a small set of elementary particles. According to established scientific theories, these particles fall into two basic and deeply distinct categories: bosons and fermions.
Bosons are sociable. They happily pile into the same quantum state – that is, the same combination of quantum properties, such as energy level – as photons do when they form a laser. Fermions, by contrast, are the introverts of the particle world. They flat out refuse to share a quantum state with one another. This reclusive behaviour is what forces electrons to arrange themselves in layered atomic shells, ultimately giving rise to the structure of the periodic table and the rich chemistry it enables.
At least, that’s what we assumed. In recent years, evidence has been accumulating for a third class of particles called ‘anyons’. Their name, coined by the Nobel laureate Frank Wilczek, gestures playfully at their refusal to fit into the standard binary of bosons and fermions – for anyons, anything goes. If confirmed, anyons wouldn’t just add a new member to the particle zoo. They would constitute an entirely novel category – a new genus – that rewrites the rules for how particles move, interact, and combine. And those strange rules might one day engender new technologies.
Although none of the elementary particles that physicists have detected are anyons, it is possible to engineer environments that give rise to them and potentially harness their power. We now think that some anyons wind around one another, weaving paths that store information in a way that’s unusually hard to disturb. That makes them promising candidates for building quantum computers – machines that could revolutionise fields like drug discovery, materials science, and cryptography. Unlike today’s easily disturbed quantum systems, anyon-based designs may offer built-in protection.
Philosophically, however, there’s a wrinkle in the story. The theoretical foundations make it clear that anyons are possible only in two dimensions, yet we inhabit a three-dimensional world. That makes them seem, in a sense, like fictions. When scientists seek to explore the behaviours of complicated systems, they use what philosophers call ‘idealisations’, which can reveal underlying patterns by stripping away messy real-world details. But these idealisations may also mislead. If a scientific prediction depends entirely on simplification – if it vanishes the moment we take the idealisation away – that’s a warning sign that something has gone wrong in our analysis.
So, if anyons are possible only through two-dimensional idealisations, what kind of reality do they actually possess? Are they fundamental constituents of nature, emergent patterns, or something in between? Answering these questions means venturing into the quantum world, beyond the familiar classes of particles, climbing among the loops and holes of topology, detouring into the strange physics of two-dimensional flatland – and embracing the idea that apparently idealised fictions can reveal deeper truths…
[Shech explains anyons, and considers the various strategies for making sense of them. (Perhaps “paraparticles” like anyons don’t actually exist. Or we simply lack the theoretical framework and experimental work needed to find them. Or, in the physics of ultra-thin materials, we’ve already found them.) Considering the latter two possibilities, he concludes…]
So, if anyons exist, what kind of existence is it? None of the elementary particles are anyons. Instead, physicists appeal to the notion of ‘quasiparticles’: large numbers of electrons or atoms interact in complex ways and behave, collectively, like a single simpler object with novel properties that you can track.
Picture fans doing ‘the wave’ in a stadium. The wave travels around the arena as if it’s a single thing, even though it’s really just people standing and sitting in sequence. In a solid, the coordinated motion of many particles can act the same way – forming a ripple or disturbance that moves as if it were its own particle. Sometimes, the disturbance centres on an individual particle, like an electron trying to move through a material. As it bumps into nearby atoms and other electrons, they push back, creating a kind of ‘cloud’ around it. The electron plus its cloud behave like a single, heavier, slower particle with new properties. That whole package is also treated as a quasiparticle.
Some quasiparticles behave like bosons or fermions. But for others, when two of them trade places, the system’s quantum state picks up a built-in marker – a phase – that isn’t limited to the two familiar settings of +1 (bosons) and −1 (fermions). It can take on intermediate values, which means novel quantum statistics. If the theories describing these systems are right, then the quasiparticles in question aren’t just behaving oddly, they are anyons: the third type of particle.
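[To make that marker concrete – a standard textbook sketch, not drawn from Shech’s essay – exchanging two identical particles multiplies the system’s wavefunction by a phase factor, and the allowed values of that phase are what separate the three classes:]

```latex
\psi(x_2, x_1) = e^{i\theta}\,\psi(x_1, x_2),
\qquad
\theta =
\begin{cases}
0 & \text{bosons (factor } +1\text{)} \\
\pi & \text{fermions (factor } -1\text{)} \\
\text{any value} & \text{anyons, possible only in 2D}
\end{cases}
```

[The restriction to two dimensions is topological: in 3D, a path that winds one particle around another can be continuously shrunk away, which forces θ to be 0 or π; in 2D the winding cannot be undone, so intermediate phases survive.]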
In other words, while none of the elementary particles that physicists have detected are anyons – physicists have never ‘seen’ an anyon in isolation – we can engineer environments that give rise to emergent quasiparticles exhibiting the quantum statistics of anyons. In this sense, anyons have been experimentally confirmed. But there are different kinds of anyons, and there is still active work being done on the more exotic anyons that we hope to harness for quantum computers.
But even so, are quasiparticles, like anyons, really real? That depends. Some philosophers argue that existence depends on scale. Zoom in close enough, and it makes little sense to talk about tables or trees – those objects show up only at the human scale. In the same way, some particles exist only in certain settings. Anyons don’t appear in the most fundamental theories, but they show up in thin, flat systems where they are the stable patterns that help explain real, measurable effects. From this point of view, they’re as real as anything else we use to explain the world.
Others take a more radical stance. They argue that quasiparticles, fields and even elementary particles aren’t truly real: they’re just useful labels. What really exists is not stuff but structure: relations and patterns. So ‘anyons’ are one way we track the relevant structure when a system is effectively two-dimensional.
Questions about reality take us deep into philosophy, but they also open the door to a broader enquiry: what does the story of anyons reveal about the role of idealisations and fictions in science? Why bother playing in flatland at all?
Often, idealisations are seen as nothing more than shortcuts. They strip away details to make the mathematics manageable, or serve as teaching tools to highlight the essentials, but they aren’t thought to play a substantive role in science. On this view, they’re conveniences, not engines of discovery.
But the story of anyons shows that idealisations can do far more. They open up new possibilities, sharpen our understanding of theory, clarify what a phenomenon is supposed to be in the first place, and sometimes even point the way to new science and engineering.
The first payoff is possibility: idealisation lets us explore a theory’s ‘what ifs’ – the range of behaviours it allows, even if the world doesn’t exactly realise them. When we move to two dimensions, quantum mechanics suddenly permits a new kind of particle choreography: not just a simple swap, but novel wind-and-weave rules for how particles can combine and interact. Thinking in this strictly two-dimensional setting is not a parlour trick. It’s a way to see what the theory itself makes possible.
That same detour through flatland also helps us understand the theory better. Idealised cases turn up the contrast knobs. In three dimensions, particle exchanges blur into just two familiar options: bosons and fermions. In two dimensions, the picture sharpens. By simplifying the world, the idealisation makes the theory’s structure visible to the naked eye.
Idealisation also helps us pin down what a phenomenon really is. It separates difference-makers from distractions. In the anyon case, the flat setting reveals what would count as a genuine signature, say, a lasting memory of the winding of particles, and what would be a mere lookalike that ordinary bosons or fermions could mimic. It also highlights contrasts with other theoretical possibilities: paraparticles, for example, don’t depend on a two-dimensional world, but anyons seem to. That contrast helps identify what belongs to the essence of anyons and what does not. When we return to real materials, we know what to look for and what to ignore.
Finally, idealisations don’t just help us read a theory – they help write the next one. If experiments keep turning up signatures that seem to exist only in flatland, then what began as an idealisation becomes a compass for discovery. A future theory must build that behaviour into its structure as a genuine, non-idealised possibility. Sometimes, that means showing how real materials effectively enforce the ideal constraint, such as true two-dimensionality. Other times, it means uncovering a new mechanism that reproduces the same exchange behaviour without the fragile assumptions of perfect flatness. In both cases, idealisation serves as a guide for theory-building. It tells us which features must survive, which can bend, and where to look for the next, more general theory.
So, when we venture into flatland to study anyons, we’re not just simplifying – we’re exploring the boundaries where mathematics, matter and reality meet. The journey from fiction to fact may be strange, but it’s also how science moves forward…
Eminently worth reading in full: “Playing in flatland,” from @elayshech.bsky.social in @aeon.co.
Pair with: “Is Particle Physics Dead, Dying, or Just Hard?”
* Edwin A. Abbott, Flatland: A Romance of Many Dimensions
###
As we brood over the boundaries of “being” (and knowing), we might spare a thought for Bertrand Russell; he died on this date in 1970. A philosopher, logician, mathematician, and public intellectual, he influenced mathematics, logic, and several areas of analytic philosophy.
He was one of the early 20th century’s prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell, with Moore, led the British “revolt against idealism”. Together with his former teacher Alfred North Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt [if ultimately unsuccessful, pace Gödel] to reduce the whole of mathematics to logic. Russell’s article “On Denoting” is considered a “paradigm of philosophy.”
“I love to talk about nothing. It’s the only thing I know anything about.”*…

Try as they might, scientists can’t truly rid a space or an object of its energy. But as George Musser reports, what “zero-point energy” really means is up for interpretation…
Suppose you want to empty a box. Really, truly empty it. You remove all its visible contents, pump out any gases, and — applying some science-fiction technology — evacuate any unseeable material such as dark matter. According to quantum mechanics, what’s left inside?
It sounds like a trick question. And in quantum mechanics, you know to expect a trick answer. Not only is the box still filled with energy, but all your efforts to empty it have barely put a dent in the amount.
This unavoidable residue is known as ground-state energy, or zero-point energy. It comes in two basic forms: The one in the box is associated with fields, such as the electromagnetic field, and the other is associated with discrete objects, such as atoms and molecules. You may dampen a field’s vibrations, but you cannot eliminate every trace of its presence. And atoms and molecules retain energy even if they’re cooled arbitrarily close to absolute zero. In both cases, the underlying physics is the same.
Zero-point energy is characteristic of any material structure or object that is at least partly confined, such as an atom held by electric fields in a molecule. The situation is like that of a ball that has settled at the bottom of a valley. The total energy of the ball consists of its potential energy (related to position) plus its kinetic energy (related to motion). To zero out both components, you would have to give a precise value to both the object’s position and its velocity, something forbidden by the Heisenberg uncertainty principle.
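[To see the residue in the simplest setting – a textbook sketch, not part of Musser’s article – model the ball in the valley as a quantum harmonic oscillator of frequency ω. Its energy comes in discrete steps, and even the lowest level is not zero:]

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \quad n = 0, 1, 2, \ldots
\qquad\Longrightarrow\qquad
E_0 = \tfrac{1}{2}\hbar\omega > 0
```

[That leftover ħω/2 is exactly the price of the uncertainty principle: a state with both Δx = 0 and Δp = 0 would violate Δx·Δp ≥ ħ/2, so some spread in position and momentum – and hence some energy – always remains.]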
What the existence of zero-point energy tells you at a deeper level depends ultimately on which interpretation of quantum mechanics you adopt. The only noncontentious thing you can say is that, if you situate a bunch of particles in their lowest energy state and measure their positions or velocities, you will observe a spread of values. Despite being drained of energy, the particles will look as if they’ve been jiggling. In some interpretations of quantum mechanics, they really have been. But in others, the appearance of motion is a misleading holdover from classical physics, and there is no intuitive way to picture what’s happening…
More on the development of our understanding of “zero-point energy” and on the questions that remain: “In Quantum Mechanics, Nothingness Is the Potential To Be Anything,” from @georgemusser.com in @quantamagazine.bsky.social.
For the most amusing of musings on nothing, see Percival Everett‘s Dr. No.
* Oscar Wilde
###
As we noodle on nought, we might spare a thought for Kurt Gödel; he died on this date in 1978. A mathematician, logician, and philosopher, he is best known for his Incompleteness Theorems (1931), which show that in any consistent axiomatic system rich enough to express arithmetic, there are propositions that cannot be proved or disproved within the system. In particular, the consistency of the axioms cannot be proved from within… thus ending a century of attempts to put the whole of mathematics on a complete and consistent axiomatic basis. [See here for a consideration of what his finding might mean for moral philosophy…]
“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…
Socrates worried about the impact of a new technology – writing – on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…
Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.
Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.
To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).
Bloom’s taxonomy of critical thinking makes a great deal of sense. In his pyramid – knowledge and comprehension at the base, then application and analysis, with synthesis and evaluation at the top – what we’d call “the creative act” occupies the top two entries, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.
To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.
In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…
… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.
We can view this through the lens of one of the most cited papers in all of psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor Chess players think in terms of individual pieces and individual moves, but great Chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.
But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…
While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.
This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.
The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…
… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.
But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.
With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).
Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.
So then what’s the take-away?
For one, I think we should be cautious about AI exposure in children. E.g., another paper in the brain-drain research subfield found that younger AI users showed the most dependency, and that the younger cohort didn’t match the critical thinking skills of older, more skeptical AI users. As a young user put it:
It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.
What a lovely new concern for parents we’ve invented!
Already nowadays, parents have to weather internal debates and worries about exposure to short-form video content platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how to not overexpose their child to social media or addictive video games, etc.
Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.
For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule of thumb the academic literature already supports: skepticism of AI capabilities – whether or not that skepticism is warranted! – makes for healthier AI usage.
In other words, pro-human bias and AI distrust are cognitively beneficial.
It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.
The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.
Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?”
* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b
###
As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.
But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy, the central result of which was a new field, process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).
“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”
“Chance, too, which seems to rush along with slack reins, is bridled and governed by law”*…
… though that law can sometimes be less than obvious. Erica Klarreich reports on one creative mathematician’s efforts to help us learn…
In late January, Daniel Litt posed an innocent probability puzzle on the social media platform X (formerly known as Twitter) — and set a corner of the Twitterverse on fire.
Imagine, he wrote, that you have an urn filled with 100 balls, some red and some green. You can’t see inside; all you know is that someone determined the number of red balls by picking a number between zero and 100 from a hat. You reach into the urn and pull out a ball. It’s red. If you now pull out a second ball, is it more likely to be red or green (or are the two colors equally likely)?
Of the tens of thousands of people who voted on an answer to Litt’s problem, only about 22% chose correctly. (We’ll reveal the solution below, in case you want to think it over first.) In the months since, Litt, a mathematician at the University of Toronto, has continued to confound Twitter users with a series of probability puzzles about urns and coin tosses.
His posts have prompted lively online discussions among research mathematicians, computer scientists and economists — as well as philosophers, financiers, sports analysts and anonymous fans. Some joked that the puzzles were distracting them from their real work — “actively slowing down economic research,” as one economist put it. Others have posted papers exploring the puzzles’ mathematical ramifications.
Litt’s online project doesn’t just highlight the enduring allure of brainteasers. It also demonstrates the limits of our mathematical intuition, and the counterintuitive nature of probabilistic reasoning. As Litt wrote, there’s “nothing more exhilarating than posing a multiple-choice problem on which 50,000 people do substantially worse than random chance.”…
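If you’d rather test your intuition empirically before clicking through, here’s a minimal Monte Carlo sketch of Litt’s urn (in Python with NumPy; the final comment spoils the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 1_000_000

# Prior: the number of red balls R is uniform on {0, 1, ..., 100}.
reds = rng.integers(0, 101, size=trials)

# First draw from an urn of 100: red with probability R/100.
first_red = rng.random(trials) < reds / 100

# Condition on the first ball being red; one red has left the urn.
r = reds[first_red]
second_red = rng.random(r.size) < (r - 1) / 99

print(f"P(second red | first red) ~ {second_red.mean():.4f}")
# Prints roughly 0.667: a red first draw is evidence that the urn
# is red-heavy, so the second ball is more likely red than green.
```

(With this uniform prior, the exact answer works out to 2/3, echoing Laplace’s rule of succession.)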
The answer to this puzzle, other puzzles, and Litt on what makes a great puzzle, and why simple probability questions can be so deceptively difficult: “Perplexing the Web, One Probability Puzzle at a Time,” from @EricaKlarreich in @QuantaMagazine.
Vaguely related (but also very interesting): “The Bookmaker,” via @annfriedman, who observes: “Leif Weatherby and Ben Recht on Nate Silver and the addiction to prediction: ‘Silver insists that viewing all decisions through this lens of gambling is the underappreciated characteristic of Very Successful People,’ they write. ‘But what Silver willfully ignores is that the successful players in this world aren’t the bettors. They are the bookies and casino owners—the house that never loses.'”
* Boethius, The Consolation of Philosophy
###
As we contemplate chance, we might send confirmatory birthday greetings to Carl David Anderson; he was born on this date in 1905. An experimental physicist, he shared the 1936 Nobel Prize in Physics for his discovery – that is to say, confirmation of the existence – of the positron, the first known particle of antimatter. The positron had been predicted by the mathematician and physicist Paul Dirac, whose “Dirac Equation” – in part a product of its author’s application of probability theory – anticipated (among many other features of quantum theory as we know it) the existence of the particle, and with it antimatter.