(Roughly) Daily

Posts Tagged ‘Brain’

“Zero is powerful because it is infinity’s twin. They are equal and opposite, yin and yang.”*…

Inside the Chaturbhuj Temple in India (left), a wall inscription features the oldest known instance of the digit zero, dated to 876 CE (right). It is part of the number 270.

… and like infinity, zero can be a cognitive challenge. Yasemin Saplakoglu explains…

Around 2,500 years ago, Babylonian traders in Mesopotamia impressed two slanted wedges into clay tablets. The shapes represented a placeholder digit, squeezed between others, to distinguish numbers such as 50, 505 and 5,005. An elementary version of the concept of zero was born.
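
To see what that placeholder buys, here is a minimal sketch in Python (using base 10 rather than the Babylonians’ base 60; the helper name is mine) of how positional notation assigns value, and why an empty place needs its own mark:

```python
# A placeholder digit is what keeps 50, 505 and 5,005 distinct: without it,
# each would collapse into a bare string of fives, losing digit positions.

def positional_value(digits, base=10):
    """Interpret a list of digits (most significant first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(positional_value([5, 0]))        # 50
print(positional_value([5, 0, 5]))     # 505: the zero marks an empty tens place
print(positional_value([5, 0, 0, 5]))  # 5005: two empty places, two placeholders
```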

Hundreds of years later, in seventh-century India, zero took on a new identity. No longer a placeholder, the digit acquired a value and found its place on the number line, before 1. Its invention went on to spark historic advances in science and technology. From zero sprang the laws of the universe, number theory and modern mathematics.

“Zero is, by many mathematicians, definitely considered one of the greatest — or maybe the greatest — achievement of mankind,” said the neuroscientist Andreas Nieder, who studies animal and human intelligence at the University of Tübingen in Germany. “It took an eternity until mathematicians finally invented zero as a number.”

Perhaps that’s no surprise given that the concept can be difficult for the brain to grasp. It takes children longer to understand and use zero than other numbers, and it takes adults longer to read it than other small numbers. That’s because to understand zero, our mind must create something out of nothing. It must recognize absence as a mathematical object.

“It’s like an extra level of abstraction away from the world around you,” said Benjy Barnett, who is completing graduate work on consciousness at University College London. Nonzero numbers map onto countable objects in the environment: three chairs, each with four legs, at one table. With zero, he said, “we have to go one step further and say, ‘OK, there wasn’t anything there. Therefore, there must be zero of them.’”

In recent years, research has begun to uncover how the human brain represents numbers, but until now no one had examined how it handles zero. Two independent studies, led by Nieder and Barnett, respectively, have now shown that the brain codes for zero much as it does for other numbers, on a mental number line. But, one of the studies found, zero also holds a special status in the brain…

Read on to find out the ways in which new studies are uncovering how the mind creates something out of nothing: “How the Human Brain Contends With the Strangeness of Zero,” from @QuantaMagazine.

Pair with Percival Everett’s provocative (and gloriously entertaining) Dr. No.

* Charles Seife, Zero: The Biography of a Dangerous Idea

Scheduling note: your correspondent is sailing again into uncommonly busy waters. So, with apologies for the hiatus, (R)D will resume on Friday the 25th…

###

As we noodle on noodling on nothing, we might send carefully-calculated birthday greetings to Erasmus Reinhold; he was born on this date in 1511. A professor of Higher Mathematics (at the University of Wittenberg, where he was ultimately Rector), Reinhold worked at a time when “mathematics” included applied mathematics, especially astronomy– to which he made many contributions and of which he was considered the most influential pedagogue of his generation.

Reinhold’s Prutenicae Tabulae (1551, 1562, 1571, and 1585) or Prussian Tables were astronomical tables that helped to disseminate the calculation methods of Copernicus throughout the Holy Roman Empire. That said, Reinhold (like other astronomers before Kepler and Galileo) translated Copernicus’ mathematical methods back into a geocentric system, rejecting heliocentric cosmology on physical and theological grounds. Both Reinhold’s Prutenic Tables and Copernicus’ studies were the foundation for the Calendar Reform by Pope Gregory XIII in 1582… and both made copious use of zeros.

Prutenic Tables, 1562 edition (source)

Written by (Roughly) Daily

October 22, 2024 at 1:00 am

“Right now I’m having amnesia and déjà vu at the same time. I think I’ve forgotten this before.”*…

The author, far left, as a very young child

Our first three years are usually a blur, and we don’t remember much before age seven. Kristin Ohlson wondered why…

… Freud argued that we repress our earliest memories because of sexual trauma but, until the 1980s, most researchers assumed that we retained no memories of early childhood because we created no memories – that events took place and passed without leaving a lasting imprint on our baby brains. Then in 1987, a study by the Emory University psychologist Robyn Fivush and her colleagues dispelled that misconception for good, showing that children who were just 2.5 years old could describe events from as far back as six months in their past.

But what happens to those memories? Most of us assume that we can’t recall them as adults because they’re just too far back in our past to tug into the present, but this is not the case. We lose them when we’re still children…

To form long-term memories, an array of biological and psychological stars must align, and most children lack the machinery for this alignment. The raw material of memory – the sights, sounds, smells, tastes and tactile sensations of our life experiences – arrives and registers across the cerebral cortex, the seat of cognition. For these to become memory, they must undergo bundling in the hippocampus, a brain structure named for its supposed resemblance to a sea horse, located under the cerebral cortex. The hippocampus not only bundles multiple inputs from our senses together into a single new memory, it also links these sights, sounds, smells, tastes, and tactile sensations to similar ones already stored in the brain. But some parts of the hippocampus aren’t fully developed until we’re adolescents, making it hard for a child’s brain to complete this process.

‘So much has to happen biologically to store a memory,’ the psychologist Patricia Bauer of Emory University told me. There’s ‘a race to get it stabilised and consolidated before you forget it. It’s like making Jell-O: you mix the stuff up, you put it in a mould, and you put it in the refrigerator to set, but your mould has a tiny hole in it. You just hope your Jell-O – your memory – gets set before it leaks out through that tiny hole.’

In addition, young children have a tenuous grip on chronology. They are years from mastering clocks and calendars, and thus have a hard time nailing an event to a specific time and place. They also don’t have the vocabulary to describe an event, and without that vocabulary, they can’t create the kind of causal narrative that’s at the root of a solid memory. And they don’t have a greatly elaborated sense of self, which would encourage them to hoard and reconsider chunks of experience as part of a growing life-narrative.

Frail as they are, children’s memories are then susceptible to a process called shredding. In our early years, we create a storm of new neurons in a part of the hippocampus called the dentate gyrus and continue to form them throughout the rest of our lives, although not at nearly the same rate. A recent study by the neuroscientists Paul Frankland and Sheena Josselyn of the Hospital for Sick Children in Toronto suggests that this process, called neurogenesis, can actually create forgetting by disrupting the circuits for existing memories.

Our memories can become distorted by other people’s memories of the same event or by new information, especially when that new information is similar to information already in storage. For instance, you meet someone and remember their name, but later meet a second person with a similar name and become confused about the name of the first person. We can also lose our memories when the synapses that connect neurons decay from disuse. ‘If you never use that memory, those synapses can be recruited for something different,’ Bauer told me.

Memories are less vulnerable to shredding and disruptions as the child grows up. Most of the solid memories that we carry into the rest of our lives are formed during what’s called ‘the reminiscence bump’, from ages 15 to 30, when we invest a lot of energy in examining everything to try to figure out who we are. The events, culture and people of that time remain with us and can even overshadow the features of our ageing present, according to Bauer. The movies were the best back then, and so was the music, and the fashion, and the political leaders, and the friendships, and the romances. And so on…

Why we remember so little from our youngest years: “The great forgetting,” from @kristinohlson in @aeonmag.

* Steven Wright

###

As we stroll down memory lane, we might spare a thought for Benjamin McLane Spock; he died on this date in 1998. The first pediatrician to study psychoanalysis in an effort to understand children’s needs and family dynamics, he collected his findings in a 1946 book, The Common Sense Book of Baby and Child Care, which was criticized in some academic circles as too reliant on anecdotal evidence, and in some conservative circles for promoting (what Norman Vincent Peale and others called) “permissiveness” by parents. Despite that push-back, it became one of the best-selling volumes in history; by the time of Spock’s death it had sold over 50 million copies in 40 languages.


source

“No problem can be solved from the same level of consciousness that created it”*…

Christof Koch settles his bet with David Chalmers (with a case of wine)

… perhaps especially not the problem of consciousness itself. At least for now…

A 25-year science wager has come to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. The two agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest — and declared Chalmers the winner.

What ultimately helped to settle the bet was a key study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.

“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”

Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.

Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”…

Neuroscientist Christof Koch wagered philosopher David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues: “Decades-long bet on consciousness ends — and it’s philosopher 1, neuroscientist 0,” from @Nature. Eminently worth reading in full for background and state-of-play.

* Albert Einstein

###

As we ponder pondering, we might spare a thought for Vannevar Bush; he died on this date in 1974. An engineer, inventor, and science administrator, he headed the World War II U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.

Bush also did his own work. In the years before the war, at MIT, he developed the differential analyzer, a pioneering analog computer capable of solving differential equations. It put into productive form the mechanical concepts left incomplete by Charles Babbage a half-century earlier, along with theoretical work by Lord Kelvin. The machine filled a 20×30 ft room. He also seeded ideas later adopted as internet hypertext links, most famously in his 1945 essay “As We May Think.”
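
For a feel of what that machine computed, here is a minimal numerical sketch in Python; the analyzer performed the equivalent integration mechanically, with wheel-and-disc integrators, so the step-by-step Euler scheme below is only an illustrative stand-in, not a model of the hardware:

```python
import math

# Integrate the harmonic oscillator y'' = -y step by step: a simple example
# of the kind of differential equation the analyzer solved mechanically.
def integrate_oscillator(t_end=10.0, dt=0.001):
    y, v = 1.0, 0.0                    # initial position and velocity
    for _ in range(int(t_end / dt)):
        y, v = y + v * dt, v - y * dt  # crude Euler step: y' = v, v' = -y
    return y

print(integrate_oscillator())  # approximately cos(10); improves as dt shrinks
print(math.cos(10.0))          # exact solution: y(t) = cos(t)
```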

source

“The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office”*…

For as long as humans have thought, humans have thought about thinking. George Cave on the power and the limits of the metaphors we’ve used to do that…

For thousands of years, humans have described their understanding of intelligence with engineering metaphors. In the 3rd century BCE, the invention of hydraulics popularized the model of fluid flow (“humours”) in the body. This lasted until the 1500s, when it was supplanted by the invention of automata and the idea of humans as complex machines. From electrical and chemical metaphors in the 1700s to advances in communications a century later, each metaphor reflected the most advanced thinking of its era. Today is no different: we talk of brains that store, process and retrieve memories, mirroring the language of computers.

I’ve always believed metaphors to be helpful and productive in communicating unfamiliar concepts. But this fascinating history of cognitive science metaphors shows that flawed metaphors can take hold and limit the scope for alternative ideas. In the worst case, the EU spent 10 years and $1.3 billion building a model of the brain based on the incorrect belief that the brain functions like a computer…

Thinking about thinking, from @George_Cave in @the_prepared.

Apposite: “Finding Language in the Brain.”

* Robert Frost

###

As we cogitate on cognition, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

 source

Written by (Roughly) Daily

December 9, 2022 at 1:00 am

“To sleep: perchance to dream: ay, there’s the rub”*…

I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, and how its grounding in mechanical systems inflicted such cruelty on workers, whom Taylor required to ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they had some explanatory power, they also had glaring deficits.

Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it’s unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses on how brains work!

Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model learns the quirks and spurious correlations of its training data so well that its predictions fail to generalize beyond it.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”

That’s overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.

To combat overfitting, ML researchers sometimes inject noise into the training data, in an effort to break up these spurious correlations.
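
Here’s a minimal sketch of that trick in Python with NumPy (an illustrative toy of my own, not anything from Hoel’s work): a degree-9 polynomial memorizes ten noisy points exactly, while jittering the training inputs with random noise pulls the fit back toward the underlying trend:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a simple line, but the training set is tiny and noisy,
# so a degree-9 polynomial (one coefficient per point) memorizes the noise.
x = np.linspace(-1, 1, 10)
y = 2 * x + rng.normal(0, 0.1, size=x.shape)
overfit = np.polyfit(x, y, deg=9)

# Noise injection: replicate the training set with small random jitter on
# the inputs, breaking up the point-by-point quirks the model memorized.
x_aug = np.concatenate([x + rng.normal(0, 0.1, size=x.shape) for _ in range(20)])
y_aug = np.tile(y, 20)
smoothed = np.polyfit(x_aug, y_aug, deg=9)

# Compare both fits against the true underlying line on held-out inputs.
x_test = np.linspace(-1, 1, 200)
for name, coeffs in [("overfit", overfit), ("noise-injected", smoothed)]:
    mse = np.mean((np.polyval(coeffs, x_test) - 2 * x_test) ** 2)
    print(f"{name:>14s} test MSE: {mse:.4f}")  # noise-injected is typically far lower
```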

And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by overgeneralization.

Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…

Sleeping, dreaming, and the importance of a nightly dose of irrationality– Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.

(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)

* Shakespeare, Hamlet

###

As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.

source