(Roughly) Daily

Posts Tagged ‘mind’

“The brain has corridors surpassing / Material place…”*

A flock of starlings forms a complex murmuration pattern in the evening sky against a blue backdrop.

Our brains, Luiz Pessoa suggests, are much less like machines than they are like the murmurations of a flock of starlings or an orchestral symphony…

When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.
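
The flocking example is, at bottom, an algorithm, and a few lines of code make the point vividly. Below is a minimal sketch in the spirit of Craig Reynolds’ classic “boids” model, in which each simulated bird follows just three local rules– cohesion, alignment, and separation– with respect to its nearby neighbours; the radii, weights, and arena size here are illustrative assumptions, not measurements of real starlings.

```python
# A minimal "boids"-style sketch: flocking emerges from purely local rules;
# no bird is in charge. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 200                               # number of birds
pos = rng.uniform(0, 100, (N, 2))     # positions in a 100x100 arena
vel = rng.normal(0, 1, (N, 2))        # initial random headings

def step(pos, vel, radius=10.0, dt=0.5):
    new_vel = vel.copy()
    for i in range(N):
        # Each bird sees only neighbours within `radius` of itself.
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < radius)
        if not nbr.any():
            continue
        # Rule 1 (cohesion): steer toward the local centre of mass.
        cohesion = pos[nbr].mean(axis=0) - pos[i]
        # Rule 2 (alignment): match the neighbours' average heading.
        alignment = vel[nbr].mean(axis=0) - vel[i]
        # Rule 3 (separation): avoid crowding very close neighbours.
        close = nbr & (d < radius / 3)
        separation = pos[i] - pos[close].mean(axis=0) if close.any() else 0.0
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    # Cap speed so the flock stays plausible.
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel *= np.minimum(1.0, 2.0 / np.maximum(speed, 1e-9))
    return (pos + new_vel * dt) % 100, new_vel   # wrap around the arena

for _ in range(500):
    pos, vel = step(pos, vel)
```

Run it and an initially random cloud of points organizes itself into a coherent, turning flock– with no line of code anywhere directing the group as a whole.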

Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.

These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works.

The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking…

Complexity, emergence, and consciousness: “The entangled brain” from @aeon.co. Read on for the provocative details.

* Emily Dickinson

###

As we think about thinking, we might send ambivalent birthday greetings to Robert Yerkes; he was born on this date in 1876. A psychologist, ethologist, and primatologist, he is best remembered as a principal developer of comparative (animal) psychology in the U.S. (his book The Dancing Mouse (1908) helped establish the use of mice and rats as standard subjects for experiments in psychology) and for his work in intelligence testing.

But in his later life, Yerkes began to broadcast his support for eugenics. These views– based on outmoded and incorrect racialist theories– are broadly considered specious by modern academics.

A black and white portrait of Robert Yerkes, an early 20th-century psychologist, wearing a suit and tie, with a neutral expression.

source

“Zero is powerful because it is infinity’s twin. They are equal and opposite, yin and yang.”*…

Inside the Chaturbhuj Temple in India (left), a wall inscription features the oldest known instance of the digit zero, dated to 876 CE (right). It is part of the number 270.

… and like infinity, zero can be a cognitive challenge. Yasemin Saplakoglu explains…

Around 2,500 years ago, Babylonian traders in Mesopotamia impressed two slanted wedges into clay tablets. The shapes represented a placeholder digit, squeezed between others, to distinguish numbers such as 50, 505 and 5,005. An elementary version of the concept of zero was born.
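
To see why a placeholder matters, consider how positional notation turns a string of digits into a value: each step to the left multiplies by the base. Here is a toy sketch (in base 10 for familiarity; the Babylonians worked in base 60, but the logic is identical):

```python
# Interpret a list of digits (most significant first) as a number.
def from_digits(digits, base=10):
    value = 0
    for d in digits:
        value = value * base + d  # each position is worth `base` times the next
    return value

print(from_digits([5, 5]))          # 55
print(from_digits([5, 0, 5]))       # 505
print(from_digits([5, 0, 0, 5]))    # 5005
```

Without a symbol to hold the empty positions, all three numbers would be written with the same pair of wedges– and only context could tell a merchant whether he was owed 55 or 5,005.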

Hundreds of years later, in seventh-century India, zero took on a new identity. No longer a placeholder, the digit acquired a value and found its place on the number line, before 1. Its invention went on to spark historic advances in science and technology. From zero sprang the laws of the universe, number theory and modern mathematics.

“Zero is, by many mathematicians, definitely considered one of the greatest — or maybe the greatest — achievement of mankind,” said the neuroscientist Andreas Nieder, who studies animal and human intelligence at the University of Tübingen in Germany. “It took an eternity until mathematicians finally invented zero as a number.”

Perhaps that’s no surprise given that the concept can be difficult for the brain to grasp. It takes children longer to understand and use zero than other numbers, and it takes adults longer to read it than other small numbers. That’s because to understand zero, our mind must create something out of nothing. It must recognize absence as a mathematical object.

“It’s like an extra level of abstraction away from the world around you,” said Benjy Barnett, who is completing graduate work on consciousness at University College London. Nonzero numbers map onto countable objects in the environment: three chairs, each with four legs, at one table. With zero, he said, “we have to go one step further and say, ‘OK, there wasn’t anything there. Therefore, there must be zero of them.’”

In recent years, research has started to uncover how the human brain represents numbers, but no one had examined how it handles zero. Now two independent studies, led by Nieder and Barnett, respectively, have shown that the brain codes for zero much as it does for other numbers, on a mental number line. But, one of the studies found, zero also holds a special status in the brain…

Read on to find out the ways in which new studies are uncovering how the mind creates something out of nothing: “How the Human Brain Contends With the Strangeness of Zero,” from @QuantaMagazine.

Pair with Percival Everett’s provocative (and gloriously entertaining) Dr. No.

* Charles Seife, Zero: The Biography of a Dangerous Idea

Scheduling note: your correspondent is sailing again into uncommonly busy waters. So, with apologies for the hiatus, (R)D will resume on Friday the 25th…

###

As we noodle on noodling on nothing, we might send carefully-calculated birthday greetings to Erasmus Reinhold; he was born on this date in 1511. A professor of Higher Mathematics (at the University of Wittenberg, where he was ultimately Rector), Reinhold worked at a time when “mathematics” included applied mathematics, especially astronomy– to which he made many contributions and of which he was considered the most influential pedagogue of his generation.

Reinhold’s Prutenicae Tabulae (1551, 1562, 1571, and 1585) or Prussian Tables were astronomical tables that helped to disseminate calculation methods of Copernicus throughout the Empire. That said, Reinhold (like other astronomers before Kepler and Galileo) translated Copernicus’ mathematical methods back into a geocentric system, rejecting heliocentric cosmology on physical and theological grounds. Both Reinhold’s Prutenic Tables and Copernicus’ studies were the foundation for the Calendar Reform by Pope Gregory XIII in 1582… and both made copious use of zeros.

Prutenic Tables, 1562 edition (source)

Written by (Roughly) Daily

October 22, 2024 at 1:00 am

“We ceased to be the lunatic fringe. We’re now the lunatic core.”*…

Further, in a fashion, to yesterday’s post on analog computing, an essay from Benjamin Labatut (the author of two remarkable works of “scientific-historical fiction,” When We Cease to Understand the World and The MANIAC), continuing the animating theme of those books…

We will never know how many died during the Butlerian Jihad. Was it millions? Billions? Trillions, perhaps? It was a fantastic rage, a great revolt that spread like wildfire, consuming everything in its path, a chaos that engulfed generations in an orgy of destruction lasting almost a hundred years. A war with a death toll so high that it left a permanent scar on humanity’s soul. But we will never know the names of those who fought and died in it, or the immense suffering and destruction it caused, because the Butlerian Jihad, abominable and devastating as it was, never happened.

The Jihad was an imagined event, conjured up by Frank Herbert as part of the lore that animates his science-fiction saga Dune. It was humanity’s last stand against sentient technology, a crusade to overthrow the god of machine-logic and eradicate the conscious computers and robots that in the future had almost entirely enslaved us. Herbert described it as “a thalamic pause for all humankind,” an era of such violence run amok that it completely transformed the way society developed from then onward. But we know very little of what actually happened during the struggle itself, because in the original Dune series, Herbert gives us only the faintest outlines—hints, murmurs, and whispers, which carry the ghostly weight of prophecy. The Jihad reshaped civilization by outlawing artificial intelligence or any machine that simulated our minds, placing a damper on the worst excesses of technology. However, it was fought so many eons before the events portrayed in the novels that by the time they occur it has faded into legend and crystallized in apocrypha. The hard-won lessons of the catastrophe are preserved in popular wisdom and sayings: “Man may not be replaced.” “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” “We do not trust the unknown which can arise from imaginative technology.” “We must negate the machines-that-think.” The most enduring legacy of the Jihad was a profound change in humankind’s relationship to technology. Because the target of that great hunt, where we stalked and preyed upon the very artifacts we had created to lift ourselves above the seat that nature had intended for us, was not just mechanical intelligence but the machinelike attitude that had taken hold of our species: “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments,” Herbert wrote.

Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program!

The Butlerian Jihad removed a crutch—the part of ourselves that we had given over to technology—and forced human minds to develop above and beyond the limits of mechanistic reasoning, so that we would no longer depend on computers to do our thinking for us.

Herbert’s fantasy, his far-flung vision of a devastating war between humanity and the god of machine-logic, seemed quaint when he began writing it in the Sixties. Back then, computers were primitive by modern standards, massive mainframe contraptions that could process only hundreds of thousands of cycles per second (instead of billions, like today), had very little memory, operated via punch cards, and were not connected to one another. And we have easily ignored Herbert’s warnings ever since, but now the Butlerian Jihad has suddenly returned to plague us. The artificial-intelligence apocalypse is a new fear that keeps many up at night, a terror born of great advances that seem to suggest that, if we are not very careful, we may—with our own hands—bring forth a future where humanity has no place. This strange nightmare is a credible danger only because so many of our dreams are threatening to come true. It is the culmination of a long process that hearkens back to the origins of civilization itself, to the time when the world was filled with magic and dread, and the only way to guarantee our survival was to call down the power of the gods.

Apotheosis has always haunted the soul of humankind. Since ancient times we have suffered the longing to become gods and exceed the limits nature has placed on us. To achieve this, we built altars and performed rituals to ask for wisdom, blessings, and the means to reach beyond our capabilities. While we tend to believe that it is only now, in the modern world, that power and knowledge carry great risks, primitive knowledge was also dangerous, because in antiquity a part of our understanding of the world and ourselves did not come from us, but from the Other. From the gods, from spirits, from raging voices that spoke in silence.

[Labatut invokes the mysteries of the Vedas and their Altar of Fire, which was meant to develop “a mind, (that) when properly developed, could fly like a bird with outstretched wings and conquer the skies.”…]

Seen from afar by people who were not aware of what was being made, these men and women must surely have looked like bricklayers gone mad. And that same frantic folly seems to possess those who, in recent decades, have dedicated their hearts and minds to the building of a new mathematical construct, a soulless copy of certain aspects of our thinking that we have chosen to name “artificial intelligence,” a tool so formidable that, if we are to believe the most zealous among its devotees, will help us reach the heavens and become immortal…

[Labatut recounts the stories– and works– of some of the creators of AI’s DNA: George Boole (and his logic), Claude Shannon (who put that logic to work), and Geoffrey Hinton (Boole’s great-great-grandson, and “the Godfather of AI,” who created some of the first neural networks, but has more recently undergone a change of opinion)…]

… Hinton has been transformed. He has mutated from an evangelist of a new form of reason into a prophet of doom. He says that what changed his mind was the realization that we had, in fact, not replicated our intelligence, but created a superior one.

Or was it something else, perhaps? Did some unconscious part of him whisper that it was he, rather than his great-great-grandfather, who was intended by God to find the mechanisms of thought? Hinton does not believe in God, and he would surely deny his ancestor’s claim that pain is an instrument of the Lord’s will, since he was forced to have every one of his meals on his knees, resting on a pillow like a monk praying at the altar, because of a back injury that caused him excruciating pain. For more than seventeen years, he could not sit down, and only since 2022 has he managed to do so long enough to eat.

Hinton is adamant that the dangers of thinking machines are real. And not just short-term effects like job replacement, disinformation, or autonomous lethal weapons, but an existential risk that some discount as fantasy: that our place in the world might be supplanted by AI. Part of his fear is that he believes AI could actually achieve a sort of immortality, as the Vedic gods did. “The good news,” he has said, “is we figured out how to build things that are immortal. When a piece of hardware dies, they don’t die. If you’ve got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, then you can bring it to life again. So, we’ve got immortality. But it’s not for us.”

Hinton seems to be afraid of what we might see when the embers of the Altar of Fire die down at the end of the sacrifice and the sharp coldness of the beings we have conjured up starts to seep into our bones. Are we really headed for obsolescence? Will humanity perish, not because of the way we treat all that surrounds us, nor due to some massive unthinking rock hurled at us by gravity, but as a consequence of our own irrational need to know all that can be known? The supposed AI apocalypse is different from the mushroom-cloud horror of nuclear war, and unlike the ravages of the wildfires, droughts, and inundations that are becoming commonplace, because it arises from things that we have, since the beginning of civilization, always considered positive and central to what makes us human: reason, intelligence, logic, and the capacity to solve the problems, puzzles, and evils that taint even the most fortunate person’s existence with everyday suffering. But in clawing our way to apotheosis, in daring to follow the footsteps of the Vedic gods who managed to escape from Death, we may shine a light on things that should remain in darkness. Because even if artificial intelligence never lives up to the grand and terrifying nightmare visions that presage a nonhuman world where algorithms hum along without us, we will still have to contend with the myriad effects this technology will have on human society, culture, and economics.

In the meantime, the larger specter of superintelligent AI looms over us. And while it is less likely and perhaps even impossible (nothing but a fairy tale, some say, a horror story intended to attract more money and investment by presenting a series of powerful systems not as the next step in our technological development but as a death-god that ends the world), it cannot be easily dispelled, for it reaches down and touches the fibers of our mythmaking apparatus, that part of our being that is atavistic and fearful, because it reminds us of a time when we shivered in caves and huddled together, while outside in the dark, with eyes that could see in the night, the many savage beasts and monsters of the past sniffed around for traces of our scent.

As every new AI model becomes stronger, as the voices of warning form a chorus, and even the most optimistic among us begin to fear this new technology, it is harder and harder to think without panic or to reason with logic. Thankfully, we have many other talents that don’t answer to reason. And we can always rise and take a step back from the void toward which we have so hurriedly thrown ourselves, by lending an ear to the strange voices that arise from our imagination, that feral territory that will always remain a necessary refuge and counterpoint to rationality.

Faced, as we are, with wild speculation, confronted with dangers that no one, however smart or well informed, is truly capable of managing or understanding, and taunted by the promises of unlimited potential, we may have to sound out the future not merely with science, politics, and reason, but with that devil-eye we use to see in the dark: fiction. Because we can find keys to doors we have yet to encounter in the worlds that authors have imagined in the past. As we grope forward in a daze, battered and bewildered by the capabilities of AI, we could do worse than to think about the desert planet where the protagonists of Herbert’s Dune novels sought to peer into the streaming sands of future time, under the heady spell of a drug called spice, to find the Golden Path, a way for human beings to break from tyranny and avoid extinction or stagnation by being more diverse, resilient, and free, evolving past purely logical reasoning and developing our minds and faculties to the point where our thoughts and actions are unpredictable and not bound by statistics. Herbert’s books, with their strange mixture of past and present, remind us that there are many ways in which we can continue forward while preserving our humanity. AI is here already, but what we choose to do with it and what limits we agree to place on its development remain decisions to be made. No matter how many billions of dollars are invested in the AI companies that promise to eliminate work, solve climate change, cure cancer, and rain down miracles unlike anything we have seen before, we can never fully give ourselves over to these mathematical creatures, these beings with no soul or sympathy, because they are neither alive nor conscious—at least not yet, and certainly not like us—so they do not share the contradictory nature of our minds.

In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them. But we should also consider a warning from Herbert, the central commandment he chose to enshrine at the heart of future humanity’s key religious text, a rule meant to keep us from becoming subservient to the products of our reason, and from bowing down before the God of Logic and his many fearsome offspring:

Thou shalt not make a machine in the likeness of a human mind

Before and after artificial intelligence: “The Gods of Logic” in @Harpers. Eminently worth reading in full.

For a less pessimistic view, see: “A Journey Through the Uncanny Valley: Our Relational Futures with AI,” from @dylanhendricks at @iftf.

* Geoffrey Hinton

###

As we deliberate on Daedalus’ caution, we might send fantastically far-sighted birthday greetings to a techno-optimist who might likely have brushed aside Labatut’s concerns: Hugo Gernsback, a Luxembourgian-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884.

Gernsback held 80 patents at the time of his death; he founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio.  But it was as a writer and publisher that he probably left his most lasting mark:  In 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown in 1911, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.

The “Ralph” series was an astounding success with readers; and in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories.  Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”

Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”

Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”

(Coincidentally, today is also the birthday– in 1906– of Philo T. Farnsworth, the man who actually did invent television.)

Gernsback, wearing one of his inventions, TV Glasses

source

“I am now no more than a pile of blood, bone, and meat that is unhappy”*…

Most of us are familiar with the placebo effect. Dr. Michael H. Bernstein explains the “nocebo”…

The term “nocebo effect” derives from the Latin word nocere, which translates roughly as “to harm” (as in the Hippocratic injunction primum non nocere: first, do no harm). Whereas the better-known placebo effect is typically positive (the alleviation of pain or malaise through treatments that otherwise have no inherent therapeutic value), the nocebo effect is negative, often manifesting as headache, skin irritation, or nausea.

No surprise, then, that the nocebo effect has been called “the placebo effect’s evil twin.” It can be more formally summarized as “the occurrence of a harmful event that stems from conscious or subconscious expectations.” Or, more simply: When you expect to feel sick, you are more likely to feel sick.

Of course, human expectations come up in all sorts of banal, everyday contexts, such as when you tell a friend that you’re stuck in traffic and so he or she should expect your arrival in twenty minutes. But expectation is also an important term of art that academics use (sometimes interchangeably with “expectancy”), having been popularized by Dr. Irving Kirsch, who now serves as Associate Director of the Program in Placebo Studies at Harvard Medical School.

Kirsch’s work built on that of Dr. Henry Beecher, who served with the American military during World War II. While deployed in North Africa and Italy, he gave saltwater to wounded soldiers, but told them they were receiving a powerful painkiller. Beecher did not engage in this deception by choice, but by necessity: As an anesthesiologist treating a flood of battlefield injuries, he faced the difficult task of rationing his supply of morphine.

The roots of our understanding of the nocebo effect are more obscure. But we do find an early precedent involving the work of eighteenth-century German physician Franz Mesmer, best known for his interest in the eponymous proto-hypnotic therapy known as “mesmerism.” In the salons of Paris and Vienna, he promoted the idea that illnesses could be alleviated by using magnets to govern the flow of fluid in patients’ bodies. (If this sounds like obvious quackery, which it is, bear in mind that Mesmer lacked any of our modern-day tools of science. He lived in an era when bloodletting with leeches was still seen as state-of-the-art medical treatment.)

Louis XVI (yes, the French king of guillotine fame) learned of Mesmer’s claims, and (properly) regarded them with skepticism. He established a commission to investigate, led by none other than Benjamin Franklin, who was then serving as the United States Minister to France. The American polymath and Francophile performed what we would now refer to as placebo-controlled studies so as to (as the commission put it) “separate the effects of the imagination from those attributed to magnetism.”…

… The mind’s unfortunate ability to create suffering ex nihilo can sometimes affect large groups of people through a process of social contagion (or, in the more indelicate language of the past, hysterical contagion). One such example, known as “The June Bug,” occurred in a U.S. textile mill in 1962. Many employees began to feel dizzy and nauseous. Some vomited. Rumors of a mysterious bug that was biting employees began to circulate, and eventually 62 workers became ill. Yet a subsequent Centers for Disease Control and Prevention investigation determined that no bugs could be identified. Nor could investigators find any other physical cause of the illnesses. This type of phenomenon is now referred to as psychogenic illness—sickness caused by belief.

Over the course of history, there have been countless other examples of psychogenic illness, with symptoms ranging from hysterical laughter to seizures. Aldous Huxley, the famed author of Brave New World, described one such seventeenth-century example in his lesser-known historically-based novel, The Devils of Loudun. In the 1630s, as Huxley documents, an entire convent of Ursuline nuns in the western French community of Loudun became convinced that they’d been demonically possessed (complete with convulsions, and other symptoms recognizable to any connoisseur of the modern exorcism-themed horror-movie genre) due to the unholy machinations of a (genuinely licentious) local priest named Urbain Grandier.

Could such a mass outbreak occur today, in an era when few believe in demonic spirits? Consider that during 2016 and 2017, no fewer than 21 American diplomats serving in Cuba reported a range of bizarre neurological symptoms that later came to be collectively described as “Havana Syndrome.” News of the outbreak spread globally through American diplomatic networks, and eventually more than 200 U.S. diplomats became ill. One leading theory was that the Russian government was attacking American embassies and consulates with microwaves.

To be clear: We do not yet know for certain the cause of these ailments. And it is conceivable that speculation concerning Russian involvement may prove correct (even if the microwave theory is far-fetched). That said, the possibility of psychogenic effects is obvious, and I regard it as concerning that this theory seems to have been rejected out of hand by American officials.

In 2021, in fact, a senior State Department official who’d been mandated to oversee the task force investigating Havana Syndrome was pushed out of her role when she refused to take psychogenic illness off the menu of potential causes. A former C.I.A. officer who claimed he’d been affected by Havana Syndrome while serving in Moscow declared that failing to rule out “mass hysteria” as a cause was “grotesquely insulting to victims and automatically disqualifying to lead the task force.”

I suspect that if Ben Franklin were alive today, he might take a different view…

When we experience pain, depression, or illness based on nothing more than negative expectations: “The Placebo Effect’s Evil Twin,” from @mh_bernstein in @Quillette.

Adapted, with permission, from the forthcoming book, The Nocebo Effect: When Words Make You Sick, by Michael H. Bernstein, Ph.D., Charlotte Blease, Ph.D., Cosima Locher, Ph.D., and Walter A. Brown, M.D. Published by Mayo Clinic Press.

* J.M. Coetzee, Waiting for the Barbarians

###

As we adjust our attitude, we might recall that it was on this date in 1867 that Joseph Lister published the first of his series of articles in The Lancet on “The Antiseptic Principle of the Practice of Surgery.”  Lister, having noticed that carbolic acid (phenol) was used to deodorize sewage, had experimented with using it to spray surgical instruments, surgical incisions, and dressings.  The result, he reported, was a substantially reduced incidence of gangrene.

source

“Neither privacy nor publicity is dead, but technology will continue to make a mess of both”*…

Indeed, as neurotech advances, privacy concerns grow. Devices that connect brains to computers are increasingly sophisticated. But as Fletcher Reveley asks, can the nascent neurorights movement catch up?…

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT-1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”
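
Based on the article’s description, the decoding loop can be caricatured as a beam search: a language model proposes ways each candidate transcript might continue, an encoding model predicts the brain response each continuation would evoke, and the candidates whose predictions best match the recorded scan survive to the next round. The sketch below is an illustrative toy with stub models– hypothetical names and a random stand-in “encoding model,” not the UT Austin team’s code:

```python
# Schematic beam-search decoder: propose continuations with a language-model
# stub, score them by how well a stub encoding model's predicted brain
# response matches the recorded scan. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["I", "am", "not", "finished", "twenty", "license", "home"]  # toy vocabulary
DIM = 16  # dimensionality of the toy "fMRI" feature space

def propose_continuations(words, k=3):
    """Stand-in for a GPT-style language model: suggest k plausible next words."""
    return list(rng.choice(VOCAB, size=k, replace=False))

# Stub encoding model: a fixed random map from words to a predicted brain
# response. A real encoding model is fit to hours of a participant's own data.
EMBED = rng.normal(size=(len(VOCAB), DIM))

def predict_response(words):
    return EMBED[[VOCAB.index(w) for w in words]].mean(axis=0)

def decode(recorded_scan, n_words=8, beam_width=4):
    beams = [([], 0.0)]  # (candidate word sequence, cumulative match score)
    for _ in range(n_words):
        scored = []
        for words, score in beams:
            for w in propose_continuations(words):
                cand = words + [w]
                # Keep candidates whose predicted response best matches the scan.
                match = -np.linalg.norm(predict_response(cand) - recorded_scan)
                scored.append((cand, score + match))
        beams = sorted(scored, key=lambda b: b[1], reverse=True)[:beam_width]
    return " ".join(beams[0][0])

print(decode(recorded_scan=rng.normal(size=DIM)))
```

The structure suggests why the output captures the gist rather than a verbatim transcript: the decoder never reads words off the brain directly; it searches for word sequences whose predicted responses resemble what was actually recorded.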

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time…

“Advances in Mind-Decoding Technologies Raise Hopes (and Worries),” from @FletcherReveley in @undarkmag. Eminently worth reading in full.

And to complicate things further (though appropriately), see Erik Hoel’s “Neuroscience is pre-paradigmatic. Consciousness is why.”

* danah boyd

###

As we ponder the personal, we might send telling birthday greetings to Rolla Harger; he was born on this date in 1890. Chairman of the biochemistry and pharmacology department at the Indiana University School of Medicine, he invented (in 1931) and patented (in 1936) the first field test for inebriation, the Drunkometer (the forerunner of the Breathalyzer), used to test drivers suspected of driving under the influence.

Harger overseeing a test of his kit (source)

Written by (Roughly) Daily

January 14, 2024 at 1:00 am