“If all insects disappeared, all life on earth would perish. If all humans disappeared, all life on earth would flourish.”*…
As Lars Chittka explains, insects have surprisingly rich inner lives—a revelation that has wide-ranging ethical implications…
In the early 1990s, when I was a Ph.D. student at the Free University of Berlin modeling the evolution of bee color perception, I asked a botany professor for some advice about flower pigments. I wanted to know the degrees of freedom that flowers have in producing colors to signal to bees. He replied, rather furiously, that he was not going to engage in a discussion with me, because I worked in a neurobiological laboratory where invasive procedures on live honeybees were performed. The professor was convinced that insects had the capacity to feel pain. I remember walking out of the botanist’s office shaking my head, thinking the man had lost his mind.
Back then, my views were in line with the mainstream. Pain is a conscious experience, and many scholars then thought that consciousness is unique to humans. But these days, after decades of researching the perception and intelligence of bees, I am wondering if the Berlin botany professor might have been right.
Researchers have since shown that bees and some other insects are capable of intelligent behavior that no one thought possible when I was a student. Bees, for example, can count, grasp concepts of sameness and difference, learn complex tasks by observing others, and know their own individual body dimensions, a capacity associated with consciousness in humans. They also appear to experience both pleasure and pain. In other words, it now looks like at least some species of insects—and maybe all of them—are sentient.
These discoveries raise fascinating questions about the origins of complex cognition. They also have far-reaching ethical implications for how we should treat insects in the laboratory and in the wild…
Insects are key enablers of much life on earth. They appear to exhibit intelligence, and maybe more: “Do Insects Feel Joy and Pain?” in @sciam.
Bugs are not going to inherit the earth. They own it now. So we might as well make peace with the landlord.
– Thomas Eisner
Pair with this helpfully skeptical (but respectful) review of Chittka’s book, The Mind of a Bee.
* Jonas Salk
###
As we ponder our place, we might recall that it was on this date in 1897 that physician Sir Ronald Ross made a key breakthrough: dissecting an Anopheles mosquito, he found the malaria parasite in its stomach tissue, demonstrating that mosquitoes transmit malaria. The anniversary is now celebrated as World Mosquito Day in honor of his critical discovery.

“No problem can be solved from the same level of consciousness that created it”*…
… perhaps especially not the problem of consciousness itself. At least for now…
A 25-year science wager has come to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. The two agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest — and declared Chalmers the winner.
What ultimately helped to settle the bet was a key study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.
“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”
Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.
Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”…
Neuroscientist Christof Koch wagered philosopher David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues: “Decades-long bet on consciousness ends — and it’s philosopher 1, neuroscientist 0,” from @Nature. Eminently worth reading in full for background and state-of-play.
* Albert Einstein
###
As we ponder pondering, we might spare a thought for Vannevar Bush; he died on this date in 1974. An engineer, inventor, and science administrator, he headed the World War II U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.
Bush also did his own work. Before the war, beginning in the late 1920s, he and his students at MIT developed the differential analyzer, a pioneering analog computer capable of solving differential equations; the machine filled a 20×30-foot room. It put into productive form the mechanical computing concepts left incomplete by Charles Babbage 50 years earlier, along with theoretical work by Lord Kelvin. And in his influential 1945 essay “As We May Think,” Bush seeded ideas later adopted as internet hypertext links.
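The analyzer solved equations mechanically, by chaining wheel-and-disc integrators and feeding their outputs back as inputs. A minimal numerical analogue of that feedback-loop idea (my own sketch, in no way a model of Bush’s actual hardware) wires two integrators together to solve x'' = -x:

```python
import math

# Sketch of the differential analyzer's operating principle: integrators in a
# feedback loop. Two chained integrators solve x'' = -x (simple harmonic
# motion): integrating acceleration gives velocity, integrating velocity
# gives position, and position is fed back as the (negated) acceleration.

def harmonic(dt: float = 0.001, t_end: float = math.pi) -> float:
    x, v = 1.0, 0.0              # initial position and velocity
    t = 0.0
    while t < t_end:
        a = -x                   # the equation being "wired up": x'' = -x
        v += a * dt              # first integrator: velocity from acceleration
        x += v * dt              # second integrator: position from velocity
        t += dt
    return x

print(harmonic())                # ≈ -1.0, i.e. cos(pi), the exact solution
```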
“No problem can be solved from the same level of consciousness that created it”*…
Annaka Harris on the difficulty in understanding consciousness…
The central challenge to a science of consciousness is that we can never acquire direct evidence of consciousness apart from our own experience. When we look at all the organisms (or collections of matter) in the universe and ask ourselves, “Which of these collections of matter contain conscious experiences?” in the broadest sense, the answer has to be “some” or “all”—the only thing we have direct evidence to support is that the answer isn’t “none,” as we know that at least our own conscious experiences exist.
Until we attain a significantly more advanced understanding of the brain, and of many other systems in nature for that matter, we’re forced to begin with one of two assumptions: either consciousness arises at some point in the physical world, or it is a fundamental part of the physical world (some, or all). And the sciences have thus far led with the assumption that the answer is “some” (and so have I, for most of my career) for understandable reasons. But I would argue that the grounds for this starting assumption have become weaker as we learn more about the brain and the role consciousness plays in behavior.
The problem is that what we deem to be conscious processes in nature is based solely on reportability. And at the very least, the work with split-brain and locked-in patients should have radically shifted our reliance on reportability at this point…
The realization that all of our scientific investigations of consciousness are unwittingly rooted in a blind assumption led me to pose two questions that I think are essential for a science of consciousness to keep asking:
- Can we find conclusive evidence of consciousness from outside a system?
- Is consciousness causal? (Is it doing something? Is it driving any behavior?)
The truth is that we have less and less reason to respond “yes” to either question with any confidence. And if the answer to these questions is in fact “no,” which is entirely possible, we’ll be forced to reconsider our jumping-off point. Personally, I’m still agnostic, putting the chances that consciousness is fundamental vs. emergent at more or less 50/50. But after focusing on this topic for more than twenty years, I’m beginning to think that assuming consciousness is fundamental is actually a slightly more coherent starting place…
“The Strong Assumption,” from @annakaharris.
See also: “How Do We Think Beyond Our Own Existence?”, from @annehelen.
* Albert Einstein
###
As we noodle on knowing, we might recall that it was on this date in 1987 that a patent (U.S. Patent No. 4,666,425) was awarded to Chet Fleming for a “Device for Perfusing an Animal Head”– an apparatus for keeping a severed head alive.
That device, described as a “cabinet,” used a series of tubes to accomplish what a body does for most heads that are not “discorped”—that is, removed from their bodies. In the patent application, Fleming describes how the tubes would circulate oxygenated blood and nutrients through the head and carry deoxygenated blood away, essentially performing the duties of a living thing’s circulatory system. Fleming also suggested that the device might be used for grimmer purposes.
“If desired, waste products and other metabolites may be removed from the blood, and nutrients, therapeutic or experimental drugs, anti-coagulants and other substances may be added to the blood,” the patent reads.
Although obviously designed for research purposes, the patent does acknowledge that “it is possible that after this invention has been thoroughly tested on research animals, it might also be used on humans suffering from various terminal illnesses.”
Fleming, a trained lawyer with a reputation as an eccentric, wasn’t exactly joking; he was worried that somebody would eventually pursue this research. The patent was a “prophetic patent”—that is, a patent for something that has never been built and may never be built. It was likely intended to prevent others from trying to keep severed heads alive using that technology…
Smithsonian Magazine

“The limits of my language mean the limits of my world”*…
It seems clear that we are on the verge of a consequential new wave of technology. Venkatesh Rao suggests that it may be far more impactful than most of us imagine…
In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.
But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.
…
Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.
And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…
What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?
What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…
There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.
This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.
Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create them. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.
…
Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…
Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.
* Ludwig Wittgenstein, Tractatus Logico-Philosophicus
###
As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, Russell had a powerful influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.
Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein– a founder of analytic philosophy, one principal focus of which was the philosophy of language.
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…
There is a wide range of opinions on AI and what it might portend. While artificial intelligence has its skeptics, and some argue that we should slow its development, AI is here, and it’s only getting warmed up (cf. Ezra Klein’s “This Changes Everything”).
As applications multiply (and get more sophisticated), there’s an understandable concern about its impact on employment. While tools like ChatGPT and DALL·E 2 are roiling the creative sphere, many economists are looking more broadly…
Like many revolutionary technologies before it, AI is likely to eliminate jobs. But, as has been the case in the past, experts argue, AI will likely offset much of that by spurring the creation of new jobs in addition to enhancing many existing jobs. The big question is: what sort of jobs?
“AI will wipe out a lot of current jobs, as has happened with all past technologies,” said Lawrence Katz, a labor economist at Harvard. “But I have no reason to think that AI and robots won’t continue changing the mix of jobs. The question is: will the change in the mix of jobs exacerbate existing inequalities? Will AI raise productivity so much that even as it displaces a lot of jobs, it creates new ones and raises living standards?”
Anu Madgavkar, who leads labor market research at the McKinsey Global Institute, estimates that one in four workers in the US are going to see more AI and technology adopted in their jobs. She said 50-60% of companies say they are pursuing AI-related projects. “So one way or the other people are going to have to learn to work with AI,” Madgavkar said.
While past rounds of automation affected factory jobs most, Madgavkar said that AI will hit white-collar jobs most. “It’s increasingly going into office-based work and customer service and sales,” she said. “They are the job categories that will have the highest rate of automation adoption and the biggest displacement. These workers will have to work with it or move into different skills.”…
“US experts warn AI likely to kill off jobs – and widen wealth inequality”
But most of these visions are rooted in an appreciation of what AI can currently do (and the likely extensions of those capabilities). What if AI develops in startling, discontinuous ways– what if it exhibits “emergence”?…
… Recent investigations… have revealed that LLMs (large language models) can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes…
“The Unpredictable Abilities Emerging From Large AI Models”
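(A toy illustration, entirely my own and not drawn from the study: one way sharp thresholds can appear is that many benchmark tasks are all-or-nothing chains of steps. If a task counts as solved only when every one of k steps succeeds, a per-step reliability that improves smoothly with scale yields an end-to-end success rate, p^k, that hugs zero and then climbs abruptly. The per_step_reliability curve below is purely hypothetical.)

```python
# Toy model: smooth per-step improvement can look like an abrupt "emergent"
# jump when a task requires k consecutive successes.

def per_step_reliability(scale: float) -> float:
    """Hypothetical per-step accuracy, rising smoothly with model scale."""
    return scale / (scale + 1.0)

def task_success(scale: float, k: int = 30) -> float:
    """End-to-end success rate: all k steps must succeed."""
    return per_step_reliability(scale) ** k

for scale in [1, 3, 10, 30, 100, 300, 1000]:
    print(f"scale={scale:>4}  per-step={per_step_reliability(scale):.3f}  "
          f"task={task_success(scale):.3f}")
```

On this toy curve, the task metric jumps from about 0.06 at scale 10 to about 0.74 at scale 100 while the per-step curve changes only gently, which is consistent with (though hardly proof of) the thresholds described above.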
Perhaps we should be thinking about AI not just functionally, but also philosophically…
The development of Artificial Intelligence is a scientific and engineering project, but it’s also a philosophical one. Lingering debates in the philosophy of mind have the potential to be substantially demystified, if not outright resolved, through the creation of artificial minds that parallel capabilities once thought to be the exclusive province of the human brain.
And since our brain is how we know and interface with the world more generally, understanding how the mind works can shed light on every other corner of philosophy as well, from epistemology to metaethics. My view is thus the exact opposite of Noam Chomsky’s, who argues that the success of Large Language Models is of limited scientific or philosophical import, since such models ultimately reduce to giant inscrutable matrices. On the contrary, the discovery that giant inscrutable matrices can, under the right circumstances, do many things that otherwise require a biological brain is itself a striking empirical datum — one Chomsky chooses to simply dismiss a priori.
Biological brains differ in important ways from artificial neural networks, but the fact that the latter can emulate the capacities of the former really does contribute to human self-understanding. For one, it represents an independent line of evidence that the brain is indeed computational. But that’s just the tip of the iceberg. The success of LLMs may even help settle longstanding debates on the nature of meaning itself…
“We’re all Wittgensteinians now”
And maybe we should be careful about “othering” AI (or, for that matter, any of the other forms of intelligence that surround us)…
I don’t think there is such a thing as an artificial intelligence. There are multiple intelligences, many ways of doing intelligence. What I envisage to be more useful and interesting than artificial intelligence as we currently conceive of it—which is this incredibly reduced version of human intelligence— is something more distributed, more widely empowered, and more diverse than singular intelligence would allow for. It’s actually a conversation between multiple intelligences, focused on some narrow goals. I have a new, very long-term, very nascent project I’m calling Server Farm. And the vision of Server Farm is to create a setting in which multiple intelligences could work on a problem together. Those intelligences would be drawn from all different kinds of life. That could include computers, but it could also include fungi and plants and animals in some kind of information-sharing processing arrangement. The point is that it would involve more than one kind of thinking, happening in dialogue and relationship with each other.
James Bridle, “There’s Nothing Unnatural About a Computer”
In the end, Tyler Cowen suggests, we should keep developing AI…
…what kind of civilization is it that turns away from the challenge of dealing with more…intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI? Do you really want to press the button, giving us that kind of American civilization?…
We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge…
“Existential risk, AI, and the inevitable turn in human history”
Still, we’re human, and we would do well, Samuel Arbesman suggests, to use the best of our human “tools”– the humanities– to understand AI…
So go study the concepts of narrative technique and use them to elucidate the behavior of LLMs. Or examine the rhetorical devices that writers and speakers have been using for millennia—and which GPT models have imbibed—and figure out how to use their “physical” principles in relating to these language models.
Ultimately, we need a deeper kind of cultural and humanistic competence, one that doesn’t just vaguely gesture at certain parts of history or specific literary styles. It’s still early days, but we need more of this thinking. To quote Hollis Robbins again: “Nobody yet knows what cultural competence will be in the AI era.” But we must begin to work this out.
“AI, Semiotic Physics, and the Opcodes of Story World”
All of which is to suggest that we are faced with a future that may well contain currently unimaginable capabilities, which can accrue as threats or (and) as opportunities. So, as the estimable Jaron Lanier reminds us, we need to remain centered…
“From my perspective,” he says, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”…
The way to ensure that we are sufficiently sane to survive is to remember it’s our humanness that makes us unique…
“Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’”
All of the above-sampled pieces are eminently worth reading in full.
Apposite (and offered without comment): Theta Noir
* Alan Kay
###
As we ponder progress, we might recall that it was on this date in 1979 that operators failed to notice that a relief valve was stuck open in the primary coolant system of Three Mile Island’s Unit 2 nuclear reactor following an unexpected shutdown. Consequently, enough coolant drained out of the system to allow the core to overheat and partially melt down– the worst commercial nuclear accident in American history.
