Posts Tagged ‘neuroscience’
“The brain has corridors surpassing / Material place…”*
Our brains, Luiz Pessoa suggests, are much less like machines than they are like the murmurations of a flock of starlings or an orchestral symphony…
When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.
Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.
These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works.
The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking…
Complexity, emergence, and consciousness: “The entangled brain” from @aeon.co. Read on for the provocative details.
* Emily Dickinson
###
As we think about thinking, we might send ambivalent birthday greetings to Robert Yerkes; he was born on this date in 1876. A psychologist, ethologist, and primatologist, he is best remembered as a principal developer of comparative (animal) psychology in the U.S. (his 1908 book The Dancing Mouse helped establish the use of mice and rats as standard subjects for experiments in psychology) and for his work in intelligence testing.
But in his later life, Yerkes began to broadcast his support for eugenics. Modern academics broadly consider these views specious, grounded as they were in outmoded and incorrect racialist theories.
“I love to talk about nothing. It’s the only thing I know anything about.”*…
It took centuries for people to embrace the zero. Now, as Benjy Barnett explains, it’s helping neuroscientists understand how the brain perceives absences…
When I’m birdwatching, I have a particular experience all too frequently. Fellow birders will point to the tree canopy and ask if I can see a bird hidden among the leaves. I scan the treetops with binoculars but, to everyone’s annoyance, I see only the absence of a bird.
Our mental worlds are lively with such experiences of absence, yet it’s a mystery how the mind performs the trick of seeing nothing. How can the brain perceive something when there is no something to perceive?
For a neuroscientist interested in consciousness, this is an alluring question. Studying the neural basis of ‘nothing’ does, however, pose obvious challenges. Fortunately, there are other – more tangible – kinds of absences that help us get a handle on the hazy issue of nothingness in the brain. That’s why I spent much of my PhD studying how we perceive the number zero.
Zero has played an intriguing role in the development of our societies. Throughout human history, it has floundered in civilisations fearful of nothingness, and flourished in those that embraced it. But that’s not the only reason it’s so beguiling. In striking similarity to the perception of absence, zero’s representation as a number in the brain also remains unclear. If my brain has specialised mechanisms that have evolved to count the owls perched on a branch, how does this system abstract away from what’s visible, and signal that there are no owls to count?
The mystery shared between the perception of absences and the conception of zero may not be coincidental. When your brain recognises zero, it may be recruiting fundamental sensory mechanisms that govern when you can – and cannot – see something. If this is the case, theories of consciousness that emphasise the experience of absence may find a new use for zero, as a tool with which to explore the nature of consciousness itself…
[Barnett provides a fascinating history of the zero, of its uses, and of brain scientists’ attempts to understand the (not so masterful) human ability to perceive absence…]
… All of this returns us to zero. The question is, does the same underlying neural mechanism drive experiences of both zero and perceptual absence? If it does, this would show us that, when we’re engaged in mathematics using zero, we’re also invoking a more fundamental and automatic cognitive system – one that is, for instance, responsible for detecting an absence of birds when I’m birdwatching.
The brain systems used to extract positive numbers from the environment are relatively well understood. Parts of the parietal cortex have evolved to represent the number of ‘things’ in our environment while stripping away information about what those ‘things’ are. This system would simply indicate ‘four’ if I saw four owls, for example. It is thought to be central to learning the structure of our environment. If the neural systems that govern our ability to decide whether we consciously see something were found to rely on this same mechanism, it would help theories like the higher-order state space (HOSS) and perceptual reality monitoring (PRM) accounts of consciousness get a handle on how exactly this ability arises. Perhaps, just as this system learns the structure and regularities of our environment, it also learns the structure of our brain’s sensory activity to help determine when we have seen something. This is what PRM and HOSS already predict, but grounding the theories in established ideas about how the brain works may give them a stronger foothold in explaining the precise mechanisms that allow us to become aware of the world.
An intriguing hypothesis inspired by the ideas above is that, if the brain basis of zero relies on the kinds of absence-related neural mechanisms that the above frameworks take to be necessary for conscious experience, then for any organism to successfully employ the concept of zero, it might first need to be perceptually conscious. This would mean that understanding zero could act as a marker for consciousness. Given that even honeybees have been shown to enjoy a rudimentary concept of zero, this may seem – at least to some – far-fetched. Nonetheless, it seems attractive to suggest that the similarities between numerical and perceptual absences could help reveal the neural basis of not only experiences of absence but conscious awareness more broadly. Jean-Paul Sartre testified that nothingness was at the heart of being, after all.
The evolution of the number zero helped unlock the secrets of the cosmos. It remains to be seen whether it can help to unpick the mysteries of the mind. For now, studying it has at least led to less disappointment about my birdwatching failures. Now I know that there’s great complexity in seeing nothing and that, more importantly, nothing really matters…
Noodling on nowt: “Why nothing matters,” from @benjyb.bsky.social in @aeon.co.
Apposite: Percival Everett‘s glorious novel, Dr. No.
* Oscar Wilde
###
As we analyze our apprehension of absence, we might send empty birthday greetings to a man who ruled out the use of “0” in one specific case: Georg Ohm; he was born on this date in 1789. A mathematician and physicist, he demonstrated by experiment (in 1825) that there are no “perfect” electrical conductors– that is to say, no conductors with 0 resistance.
Working with the new electrochemical cell, invented by Italian scientist Alessandro Volta, Ohm found that there is a direct proportionality between the potential difference (voltage) applied across a conductor and the resultant electric current– a relationship since known as Ohm’s law (V = IR). The SI unit of resistance is the ohm (symbol Ω).
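Ohm’s law is simple enough to check in a few lines; a minimal sketch in Python (the component values are invented for illustration):

```python
def voltage_drop(current_amps: float, resistance_ohms: float) -> float:
    """Ohm's law: V = I * R."""
    return current_amps * resistance_ohms

# A 2 A current through a 50-ohm resistor drops 100 V across it.
print(voltage_drop(2.0, 50.0))  # → 100.0

# Ohm's point about "perfect" conductors: any nonzero resistance means
# any current produces some voltage drop, however small.
print(voltage_drop(2.0, 0.001))  # → 0.002
```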
“Even though our lives wander, our memories remain in one place”*…
Your correspondent’s fascination with the “memory palace,” the age-old technique of memorization, has shown up in (R)D many times before (e.g., here, here, here, here, here, and here). That it works has long been understood– but how it works, not so much. Ingrid Wickelgren reports on research that may offer a clue…
After shuffling the cards in a standard 52-card deck, Alex Mullen, a three-time world memory champion, can memorize their order in under 20 seconds. As he flips through the cards, he takes a mental walk through a house. At each point in his journey — the mailbox, front door, staircase and so on — he attaches a card. To recall the cards, he relives the trip.
This technique, called “method of loci” or “memory palace,” is effective because it mirrors the way the brain naturally constructs narrative memories: Mullen’s memory for the card order is built on the scaffold of a familiar journey. We all do something similar every day, as we use familiar sequences of events, such as the repeated steps that unfold during a meal at a restaurant or a trip through the airport, as a home for specific details — an exceptional appetizer or an object flagged at security. The general narrative makes the noteworthy features easier to recall later.
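The method of loci can be caricatured as a simple binding of items to an ordered, familiar route; a toy sketch in Python (the route and cards here are invented, not Mullen’s actual palace):

```python
# A familiar route provides the ordered scaffold.
route = ["mailbox", "front door", "staircase", "kitchen table"]

# Items to be memorized are bound to successive loci along the route.
cards = ["ace of spades", "7 of hearts", "queen of diamonds", "2 of clubs"]
palace = dict(zip(route, cards))

# Recall is just re-walking the route in order.
recalled = [palace[locus] for locus in route]
print(recalled)  # → ['ace of spades', '7 of hearts', 'queen of diamonds', '2 of clubs']
```

The scaffold does the work: because the route’s order is already known, only the bindings are new information.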
“You are taking these details and connecting them to this prior knowledge,” said Christopher Baldassano, a cognitive neuroscientist at Columbia University. “We think this is how you create your autobiographical memories.”
Psychologists empirically introduced this theory some 50 years ago, but proof of such scaffolds in the brain was missing. Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events. He believes that the brain builds a rich library of scripts for expected scenarios — restaurant or airport, business deal or marriage proposal — over a person’s lifetime.
These standardized scripts, and departures from them, influence how and how well we remember specific instances of these event types, his lab has found. And recently, in a paper published in Current Biology in fall 2024, they showed that individuals can select a dominant script for a complex, real-world event — for example, while watching a marriage proposal in a restaurant, we might opt, subconsciously, for either a proposal or a restaurant script — which determines what details we remember…
The fascinating details of how, by screening films in a brain scanner, neuroscientists discovered a rich library of neural scripts — from a trip through an airport to a marriage proposal — that form scaffolds for memories of our experiences: “How ‘Event Scripts’ Structure Our Personal Memories,” from @iwickelgren in @quantamagazine.bsky.social.
* Marcel Proust
###
As we remember (and lest we forget), we might recall that it was on this date in 1920 that Adolf Hitler, the propaganda head of the German Workers’ Party (DAP), gave a speech (now known as “Hitler’s Hofbräuhaus speech”) to 2,000 followers at a Munich beer hall announcing the change in the party’s name to the Nationalsozialistische Deutsche Arbeiterpartei (“National Socialist German Workers’ Party”, or Nazi Party). It was then that the party officially announced that only persons of “pure Aryan descent” could become members and that their spouses had to be “racially pure” as well.
Oh, and on this date in 1868, an American President (Andrew Johnson) was impeached for the first time.
“The world of reality has its limits; the world of imagination is boundless”*…
Still, it’s useful to know the difference… and as Yasemin Saplakoglu explains, that’s a complex process– one that science takes very seriously…
As I sit at my desk typing up this newsletter, I can see a plant to my left, a water bottle to my right and a gorilla sitting across from me. The plant and bottle are real, but the gorilla is a product of my mind — and I intuitively know that this is true. That’s because my brain, like most people’s, has the ability to distinguish reality from imagination. If it didn’t, or if I had a condition that disrupts this distinction, I’d constantly see gorillas and elephants where they don’t exist.
Imagination is sometimes described as perception in reverse. When we look at an object, electromagnetic waves enter the eyes, where they are translated into neural signals that are then sent to the visual cortex at the back of the brain. This process generates an image: “plant.” With imagination, we start with what we want to see, and the brain’s memory and semantic centers send signals to the same brain region: “gorilla.”
In both cases, the visual cortex is activated. Recalling memories can also activate some of the same regions. Yet the brain can clearly distinguish between imagination, perception and memory in most cases (though it is still possible to get confused). How does it keep everything straight?
By probing the differences between these processes, neuroscientists are untangling how the human brain creates our experience. They’re finding that even our perception of reality is in many ways imagined. “Underneath our skull, everything is made up,” Lars Muckli, a professor of visual and cognitive neurosciences at the University of Glasgow, told me. “We entirely construct the world in its richness and detail and color and sound and content and excitement. … It is created by our neurons.”
To distinguish reality and imagination, the brain might have some kind of “reality threshold,” according to one theory. Researchers recently tested this by asking people to imagine specific images against a backdrop — and then secretly projected faint outlines of those images there. Participants typically recognized when they saw a real projection versus their imagined one, and those who rated images as more vivid were also more likely to identify them as real. The study suggested that when processing images, the brain might make a judgment on reality based on signal strength. If the signal is weak, the brain takes it for imagination. If it’s strong, the brain deems it real. “The brain has this really careful balancing act that it has to perform,” Thomas Naselaris, a neuroscientist at the University of Minnesota, told me. “In some sense it is going to interpret mental imagery as literally as it does visual imagery.”
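The “reality threshold” idea can be caricatured in a few lines; a toy sketch in Python (the cutoff value and names are mine, not the study’s — the brain’s actual criterion is exactly what researchers are trying to pin down):

```python
REALITY_THRESHOLD = 0.5  # arbitrary cutoff for this toy model

def judge(signal_strength: float) -> str:
    """Toy model of reality monitoring: strong sensory signals are
    judged real; weak ones are attributed to imagination."""
    return "real" if signal_strength >= REALITY_THRESHOLD else "imagined"

print(judge(0.9))  # → real
print(judge(0.2))  # → imagined
```

On this account, a sufficiently vivid mental image could cross the threshold and be misjudged as real — which fits the study’s finding that participants who rated images as more vivid were more likely to call them real.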
Although recalling memories is a creative and imaginative process, it activates the visual cortex as if we were seeing. “It started to raise the question of whether a memory representation is actually different from a perceptual representation at all,” Sam Ling, a neuroscientist at Boston University, told me. A recent study looked to identify how memories and perceptions are constructed differently at the neurobiological level. When we perceive something, visual cues undergo layers of processing in the visual cortex that increase in complexity. Neurons in earlier parts of this process fire more precisely than those that get involved later. In the study, researchers found that during memory recall, neurons fired in a much blurrier way through all the layers. That might explain why our memories aren’t often as crisp as what we’re seeing in front of us…
“How Do Brains Tell Reality From Imagination?” from @yaseminsaplakoglu.bsky.social in @quantamagazine.bsky.social.
* Jean-Jacques Rousseau
###
As we parse perception, we might send mindful birthday greetings to a man whose work figures into the history of science’s struggle on this issue, Franz Brentano; he was born on this date in 1838. A philosopher and psychologist, he is best known for his 1874 Psychology from an Empirical Standpoint; considered his magnum opus, it is credited with having reintroduced the medieval scholastic concept of intentionality into contemporary philosophy and psychology.
Brentano also studied perception, with conclusions that prefigure the discussion above…
He is also well known for claiming that Wahrnehmung ist Falschnehmung (‘perception is misconception’) – that is to say, that perception is erroneous. In fact, he maintained that external, sensory perception could not tell us anything about the de facto existence of the perceived world, which could simply be illusion. However, we can be absolutely sure of our internal perception. When I hear a tone, I cannot be completely sure that there is a tone in the real world, but I am absolutely certain that I do hear. This awareness of the fact that I hear is called internal perception. External perception, sensory perception, can only yield hypotheses about the perceived world, but not truth. Hence he and many of his pupils (in particular Carl Stumpf and Edmund Husserl) thought that the natural sciences could only yield hypotheses and never universal, absolute truths, as in pure logic or mathematics.
However, in a reprinting of his Psychologie vom Empirischen Standpunkte (Psychology from an Empirical Standpoint), he recanted this previous view. He attempted to do so without reworking the previous arguments within that work, but it has been said that he was wholly unsuccessful. The new view states that when we hear a sound, we hear something from the external world; there are no physical phenomena of internal perception… – source
“Zero is powerful because it is infinity’s twin. They are equal and opposite, yin and yang.”*…

… and like infinity, zero can be a cognitive challenge. Yasemin Saplakoglu explains…
Around 2,500 years ago, Babylonian traders in Mesopotamia impressed two slanted wedges into clay tablets. The shapes represented a placeholder digit, squeezed between others, to distinguish numbers such as 50, 505 and 5,005. An elementary version of the concept of zero was born.
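The placeholder’s job is easy to see in positional arithmetic; a minimal sketch in Python (digit lists are written most-significant-first; the function name is mine):

```python
def positional_value(digits, base=10):
    """Interpret a sequence of digits in positional notation."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# Without a placeholder, '5 5' could mean 55, 505, or 5,005.
# Zero as a placeholder removes the ambiguity:
print(positional_value([5, 5]))        # → 55
print(positional_value([5, 0, 5]))     # → 505
print(positional_value([5, 0, 0, 5]))  # → 5005
```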
Hundreds of years later, in seventh-century India, zero took on a new identity. No longer a placeholder, the digit acquired a value and found its place on the number line, before 1. Its invention went on to spark historic advances in science and technology. From zero sprang the laws of the universe, number theory and modern mathematics.
“Zero is, by many mathematicians, definitely considered one of the greatest — or maybe the greatest — achievement of mankind,” said the neuroscientist Andreas Nieder, who studies animal and human intelligence at the University of Tübingen in Germany. “It took an eternity until mathematicians finally invented zero as a number.”
Perhaps that’s no surprise given that the concept can be difficult for the brain to grasp. It takes children longer to understand and use zero than other numbers, and it takes adults longer to read it than other small numbers. That’s because to understand zero, our mind must create something out of nothing. It must recognize absence as a mathematical object.
“It’s like an extra level of abstraction away from the world around you,” said Benjy Barnett, who is completing graduate work on consciousness at University College London. Nonzero numbers map onto countable objects in the environment: three chairs, each with four legs, at one table. With zero, he said, “we have to go one step further and say, ‘OK, there wasn’t anything there. Therefore, there must be zero of them.’”
In recent years, research started to uncover how the human brain represents numbers, but no one examined how it handles zero. Now two independent studies, led by Nieder and Barnett, respectively, have shown that the brain codes for zero much as it does for other numbers, on a mental number line. But, one of the studies found, zero also holds a special status in the brain…
Read on to find out the ways in which new studies are uncovering how the mind creates something out of nothing: “How the Human Brain Contends With the Strangeness of Zero,” from @QuantaMagazine.
Pair with Percival Everett’s provocative (and gloriously entertaining) Dr. No.
* Charles Seife, Zero: The Biography of a Dangerous Idea
Scheduling note: your correspondent is sailing again into uncommonly busy waters. So, with apologies for the hiatus, (R)D will resume on Friday the 25th…
###
As we noodle on noodling on nothing, we might send carefully-calculated birthday greetings to Erasmus Reinhold; he was born on this date in 1511. A professor of Higher Mathematics (at the University of Wittenberg, where he was ultimately Rector), Reinhold worked at a time when “mathematics” included applied mathematics, especially astronomy– to which he made many contributions and of which he was considered the most influential pedagogue of his generation.
Reinhold’s Prutenicae Tabulae (1551, 1562, 1571, and 1585) or Prussian Tables were astronomical tables that helped to disseminate calculation methods of Copernicus throughout the Empire. That said, Reinhold (like other astronomers before Kepler and Galileo) translated Copernicus’ mathematical methods back into a geocentric system, rejecting heliocentric cosmology on physical and theological grounds. Both Reinhold’s Prutenic Tables and Copernicus’ studies were the foundation for the Calendar Reform by Pope Gregory XIII in 1582… and both made copious use of zeros.