(Roughly) Daily


“Zero is powerful because it is infinity’s twin. They are equal and opposite, yin and yang.”*…

Inside the Chaturbhuj Temple in India (left), a wall inscription features the oldest known instance of the digit zero, dated to 876 CE (right). It is part of the number 270.

… and like infinity, zero can be a cognitive challenge. Yasemin Saplakoglu explains…

Around 2,500 years ago, Babylonian traders in Mesopotamia impressed two slanted wedges into clay tablets. The shapes represented a placeholder digit, squeezed between others, to distinguish numbers such as 50, 505 and 5,005. An elementary version of the concept of zero was born.
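The role of a placeholder digit is easy to make concrete: in positional notation, each digit's contribution depends on where it sits, so without a symbol for an empty place, 505 and 55 would be written identically. A minimal sketch in Python (illustrative only, not from the article):

```python
# Evaluate a base-10 numeral given as a list of digits, most
# significant first. The placeholder 0 marks an empty place.
def place_value(digits, base=10):
    value = 0
    for d in digits:
        value = value * base + d
    return value

# With a placeholder, 5-0-5 and 5-5 are distinct numbers:
assert place_value([5, 0, 5]) == 505
assert place_value([5, 5]) == 55

# The Chaturbhuj inscription's zero sits inside just such a numeral:
assert place_value([2, 7, 0]) == 270
```

Note that the placeholder zero here carries no value of its own; it only holds a column open — which is exactly the "elementary version" the Babylonians had, centuries before zero joined the number line.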

Hundreds of years later, in seventh-century India, zero took on a new identity. No longer a placeholder, the digit acquired a value and found its place on the number line, before 1. Its invention went on to spark historic advances in science and technology. From zero sprang the laws of the universe, number theory and modern mathematics.

“Zero is, by many mathematicians, definitely considered one of the greatest — or maybe the greatest — achievement of mankind,” said the neuroscientist Andreas Nieder, who studies animal and human intelligence at the University of Tübingen in Germany. “It took an eternity until mathematicians finally invented zero as a number.”

Perhaps that’s no surprise given that the concept can be difficult for the brain to grasp. It takes children longer to understand and use zero than other numbers, and it takes adults longer to read it than other small numbers. That’s because to understand zero, our mind must create something out of nothing. It must recognize absence as a mathematical object.

“It’s like an extra level of abstraction away from the world around you,” said Benjy Barnett, who is completing graduate work on consciousness at University College London. Nonzero numbers map onto countable objects in the environment: three chairs, each with four legs, at one table. With zero, he said, “we have to go one step further and say, ‘OK, there wasn’t anything there. Therefore, there must be zero of them.’”

In recent years, research has started to uncover how the human brain represents numbers, but no one had examined how it handles zero. Now two independent studies, led by Nieder and Barnett, respectively, have shown that the brain codes for zero much as it does for other numbers, on a mental number line. But, one of the studies found, zero also holds a special status in the brain…

Read on to find out the ways in which new studies are uncovering how the mind creates something out of nothing: “How the Human Brain Contends With the Strangeness of Zero,” from @QuantaMagazine.

Pair with Percival Everett’s provocative (and gloriously entertaining) Dr. No.

* Charles Seife, Zero: The Biography of a Dangerous Idea

Scheduling note: your correspondent is sailing again into uncommonly busy waters. So, with apologies for the hiatus, (R)D will resume on Friday the 25th…

###

As we noodle on noodling on nothing, we might send carefully calculated birthday greetings to Erasmus Reinhold; he was born on this date in 1511. A professor of Higher Mathematics (at the University of Wittenberg, where he was ultimately Rector), Reinhold worked at a time when “mathematics” included applied mathematics, especially astronomy, to which he made many contributions and of which he was considered the most influential pedagogue of his generation.

Reinhold’s Prutenicae Tabulae (1551, 1562, 1571, and 1585), or Prussian Tables, were astronomical tables that helped to disseminate the calculation methods of Copernicus throughout the Empire. That said, Reinhold (like other astronomers before Kepler and Galileo) translated Copernicus’ mathematical methods back into a geocentric system, rejecting heliocentric cosmology on physical and theological grounds. Both Reinhold’s Prutenic Tables and Copernicus’ studies were the foundation for the calendar reform enacted by Pope Gregory XIII in 1582… and both made copious use of zeros.

Prutenic Tables, 1562 edition (source)

Written by (Roughly) Daily

October 22, 2024 at 1:00 am

“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”*…

The image above is from an engaging essay on “Why we need to figure out a theory of consciousness.” But, as Robert Lawrence Kuhn argues, in practice the issue with “the Hard Problem” might better be understood as the need to choose among, then build upon (one or a few of) the myriad theories that we have. Helpfully, Kuhn has surveyed and mapped that theoretical landscape…

Explanations of consciousness abound and the radical diversity of theories is telling. Explanations, or theories, are said to work at astonishingly divergent orders of magnitude and putative realms of reality. My purpose here must be humble: collect and categorize, not assess and adjudicate. Seek insights, not answers.

Unrealistically, I’d like to get them all, at least all contemporary theories that are sufficiently distinct with explanations that can surmount an arbitrary hurdle of rationality or conceivability. Falsification or verification is not on the agenda. I’m less concerned with the ontological truth of explanations/theories than with identifying them and then locating them on a “Landscape” to enable categorization and the assessment of relationships. Next, I assess implications of categories for “big questions.” Thus, this Landscape is not about how consciousness is measured or evolved or even works, but about what consciousness is and what difference it makes.

It’s the classic “mind-body problem:” How do the felt experiences in our minds relate to the neural processes in our brains? How do mental states, whether sensory, cognitive, emotional, or even noumenal (selfless) awareness, correlate with brain states? The Landscape of Consciousness explanations or theories I want to draw is as broad as possible, including those that cannot be subsumed by, and possibly not even accessed by, the scientific method. This freedom from constraint, as it were, is no excuse for wooly thinking. Standards of rationality and clarity of argument must be maintained even more tenaciously, and bases of beliefs must be specified even more clearly.

I have two main aims: (i) gather and describe the various theories and array them in some kind of meaningful structure of high-level or first-order categories (and under Materialism, subcategories); and (ii) assess their implications, with respect to four big questions: meaning/purpose/value (if any); artificial intelligence (AI) consciousness; virtual immortality; and survival beyond death.

Theories overlap; some work together. Moreover, while a real-world landscape of consciousness, even simplified, would be drawn with three dimensions (at least), with multiple kinds and levels of nestings—a combinatorial explosion (and likely no closer to truth)—I satisfice with a one-dimensional toy-model. I array all the theories on a linear spectrum, simplistically and roughly, from the “most physical” on the left (at the beginning) to the “least physical” on the right (near the end). (I have two final categories after this spectrum.) The physicalism assumed in Materialism Theories of consciousness is characterized by naturalistic, science-based perspectives, while non-materialism theories have various degrees of nonphysicalist perspectives outside the ambit of current science and in some cases not subject to the scientific method of experimentation and replicability.

Please do not ascribe the relative importance of a theory to the relative size of its description. The shortest can be the strongest. It sometimes takes more words to describe lesser-known theories. For each description I feel the tension between conciseness and completeness. Moreover, several are not complete theories in themselves but ways to think about consciousness that strike me as original and perhaps insightful…

There follows a survey of the strands of thought/theory depicted here:

Absolutely fascinating: “A landscape of consciousness: Toward a taxonomy of explanations and implications,” from @RobertLawrKuhn via @RogersBacon1

… Who also published this apposite article: “A Paradigm for AI Consciousness.”

* Max Planck

###

As we examine explanations, we might send communicative birthday greetings to Camillo Golgi; he was born on this date in 1843. A biologist and pathologist, he discovered a staining technique called the black reaction (sometimes called Golgi’s method or Golgi’s stain in his honor) in 1873, a major breakthrough in neuroscience. He was the first to identify axons and dendrites and their functions. He also identified the sense receptors of muscular sensations. Several structures and phenomena in anatomy and physiology are named for him, including the Golgi apparatus, the Golgi tendon organ and the Golgi tendon reflex.

Golgi’s investigations into the fine structure of the nervous system earned him (with the Spanish histologist Santiago Ramón y Cajal) the 1906 Nobel Prize for Physiology or Medicine.

source

“Analysis of death is not for the sake of becoming fearful but to appreciate this precious lifetime”*…

As Alex Blasdel explains, new research into the dying brain suggests the line between life and death may be less distinct than previously thought…

… For all that science has learned about the workings of life, death remains among the most intractable of mysteries. “At times I have been tempted to believe that the creator has eternally intended this department of nature to remain baffling, to prompt our curiosities and hopes and suspicions all in equal measure,” the philosopher William James wrote in 1909.

In 1976, the New York Times reported on the burgeoning scientific interest in “life after death” and the “emerging field of thanatology”. The following year, [Raymond] Moody and several fellow thanatologists founded an organisation that became the International Association for Near-Death Studies. In 1981, they printed the inaugural issue of Vital Signs, a magazine for the general reader that was largely devoted to stories of near-death experiences. The following year they began producing the field’s first peer-reviewed journal, which became the Journal of Near-Death Studies. The field was growing, and taking on the trappings of scientific respectability. Reviewing its rise in 1988, the British Journal of Psychiatry captured the field’s animating spirit: “A grand hope has been expressed that, through NDE research, new insights can be gained into the ageless mystery of human mortality and its ultimate significance, and that, for the first time, empirical perspectives on the nature of death may be achieved.”

But near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine. As researchers, the spiritualists’ aim was to collect as many reports of near-death experience as possible, and to proselytise society about the reality of life after death. Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.

The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual. Many of them were physicians and psychiatrists who had been deeply affected after hearing the near-death stories of patients they had treated in the ICU. Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.

Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain. (Indeed, many of the states reported by near-death experiencers can apparently be achieved by taking a hero’s dose of ketamine.) Their basic premise was: no functioning brain means no consciousness, and certainly no life after death. Their task, which Borjigin took up in 2015, was to discover what was happening during near-death experiences on a fundamentally physical level.

Slowly, the spiritualists left the field of research for the loftier domains of Christian talk radio, and the parapsychologists and physicalists started bringing near-death studies closer to the scientific mainstream. Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221. Those articles have appeared everywhere from the Canadian Urological Association Journal to the esteemed pages of The Lancet.

Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries…

… Perhaps the story to be written about near-death experiences is not that they prove consciousness is radically different from what we thought it was…

… there is something that binds many of these people – the physicalists, the parapsychologists, the spiritualists – together. It is the hope that by transcending the current limits of science and of our bodies, we will achieve not a deeper understanding of death, but a longer and more profound experience of life. That, perhaps, is the real attraction of the near-death experience: it shows us what is possible not in the next world, but in this one…

Eminently worth reading in full: “The new science of death: ‘There’s something happening in the brain that makes no sense’,” from @unkowthe_again in @guardian.

* Dalai Lama

###

As we ponder passages, we might send innovative (and painless) birthday greetings to Robert Andrew Hingson; he was born on this date in 1913. An anesthesiologist and inventor, he is best known for three major inventions that continue to relieve pain and suffering worldwide today. One is a very portable respirator anesthesia gas machine and resuscitator, called the Western Reserve Midget, used to deliver a short-term, general anesthetic.

The second came from extensive experiments in the use of anesthesia to prevent pain during childbirth, leading to the invention of the continuous caudal epidural anesthesia technique.

The third and best known is his “peace gun,” a pistol-shaped jet injector that enabled efficient, mass, needle-less inoculation worldwide against such diseases as smallpox, measles, tuberculosis, tetanus, leprosy, polio, and influenza. It can inoculate 1,000 persons per hour with several simultaneous vaccines.

source

“The control of large numbers is possible, and like unto that of small numbers, if we subdivide them”*…

It’s always been intuitively obvious that we handle small numbers more easily than large ones. But the discovery that the brain has different systems for representing small and large numbers provokes new questions about memory, attention, and mathematics…

More than 150 years ago, the economist and philosopher William Stanley Jevons discovered something curious about the number 4. While musing about how the mind conceives of numbers, he tossed a handful of black beans into a cardboard box. Then, after a fleeting glance, he guessed how many there were, before counting them to record the true value. After more than 1,000 trials, he saw a clear pattern. When there were four or fewer beans in the box, he always guessed the right number. But for five beans or more, his quick estimations were often incorrect.

Jevons’ description of his self-experiment, published in Nature in 1871, set the “foundation of how we think about numbers,” said Steven Piantadosi, a professor of psychology and neuroscience at the University of California, Berkeley. It sparked a long-lasting and ongoing debate about why there seems to be a limit on the number of items we can accurately judge to be present in a set.

Now, a new study in Nature Human Behaviour has edged closer to an answer by taking an unprecedented look at how human brain cells fire when presented with certain quantities. Its findings suggest that the brain uses a combination of two mechanisms to judge how many objects it sees. One estimates quantities. The second sharpens the accuracy of those estimates — but only for small numbers…
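The two-mechanism account can be caricatured in a few lines of code: a noisy estimator whose error grows with the quantity (in the spirit of Weber's law), plus a second mechanism that sharpens judgments in the small-number range (here, four or fewer). This is a toy model for illustration, not the study's actual analysis; the Weber fraction and the limit of 4 are assumptions:

```python
import random

def judge_quantity(n, weber_fraction=0.15, subitizing_limit=4):
    """Toy model: exact judgment for small sets, noisy estimation for larger."""
    if n <= subitizing_limit:
        return n  # the sharpening mechanism yields an exact count
    noise = random.gauss(0, weber_fraction * n)  # noise scales with magnitude
    return max(0, round(n + noise))

random.seed(0)
# Small sets are always judged correctly...
assert all(judge_quantity(n) == n for n in range(1, 5) for _ in range(100))
# ...while larger sets are often misjudged, echoing Jevons' bean trials.
errors = sum(judge_quantity(12) != 12 for _ in range(1000))
```

Under this sketch, a run of simulated "bean trials" reproduces Jevons' pattern: perfect accuracy up to four, frequent misses beyond it.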

Although the new study does not end the debate, the findings start to untangle the biological basis for how the brain judges quantities, which could inform bigger questions about memory, attention and even mathematics…

One, two, three, four… and more: “Why the Human Brain Perceives Small Numbers Better,” from @QuantaMagazine.

* Sun Tzu

###

As we stew over scale, we might spare a thought for a man untroubled by larger (and more complicated) numbers, Émile Picard; he died on this date in 1941. A mathematician whose theories did much to advance research into analysis, algebraic geometry, and mechanics, he made his most important contributions in the field of analysis and analytic geometry. He used methods of successive approximation to show the existence of solutions of ordinary differential equations. Picard also applied analysis to the study of elasticity, heat, and electricity. He and Henri Poincaré have been described as the most distinguished French mathematicians of their time.

Indeed, Picard was elected the fifteenth member to occupy seat 1 of the Académie française in 1924.

source

Written by (Roughly) Daily

December 11, 2023 at 1:00 am

“No problem can be solved from the same level of consciousness that created it”*…

Christof Koch settles his bet with David Chalmers (with a case of wine)

… perhaps especially not the problem of consciousness itself. At least for now…

A 25-year science wager has come to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest — and declared Chalmers the winner.

What ultimately helped to settle the bet was a key study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.

“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”

Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.

Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”…

Neuroscientist Christof Koch wagered philosopher David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues: “Decades-long bet on consciousness ends — and it’s philosopher 1, neuroscientist 0,” from @Nature. Eminently worth reading in full for background and state-of-play.

* Albert Einstein

###

As we ponder pondering, we might spare a thought for Vannevar Bush; he died on this date in 1974. An engineer, inventor, and science administrator, he headed the World War II U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.

Bush also did his own work. Before the war, in 1925, at age 35, he developed the differential analyzer, the world’s first analog computer, capable of solving differential equations. It put into productive form the mechanical concepts left incomplete by Charles Babbage 50 years earlier, as well as theoretical work by Lord Kelvin. The machine filled a 20×30 ft room. He also seeded ideas later adopted as hypertext links on the internet.

source