(Roughly) Daily

Posts Tagged ‘consciousness’

“Evolution has no foresight. Complex machinery develops its own agendas. Brains — cheat… Metaprocesses bloom like cancer, and awaken, and call themselves ‘I’.”*…

Silhouette of a woman's face merged with a digital representation of a humanoid figure, symbolizing the intersection of human consciousness and artificial intelligence.

Your correspondent is off on a trip… (R)D will be more roughly than daily for the next two weeks…

The inimitable “Scott Alexander” on the prospect of “conscious” AI (TLDR: probably not in the models we have; but as to those that may come, unclear)…

Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these – or maybe raise one to the exponent of the other, or something – and you get the quality of discourse on AI consciousness. It’s not great.

Out-of-the-box AIs mimic human text, and humans almost always describe themselves as conscious. So if you ask an AI whether it is conscious, it will often say yes. But because companies know this will happen, and don’t want to give their customers existential crises, they hard-code in a command for the AIs to answer that they aren’t conscious. Any response the AIs give will be determined by these two conflicting biases, and therefore not really believable. A recent paper expands on this method by subjecting AIs to a mechanistic interpretability “lie detector” test; it finds that AIs which say they’re conscious think they’re telling the truth, and AIs which say they’re not conscious think they’re lying. But it’s hard to be sure this isn’t just the copying-human-text thing. Can we do better? Unclear; the more common outcome for people who dip their toes in this space is to do much, much worse.
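The probe-based "lie detector" idea can be sketched in miniature. The toy below is not the paper's actual method; it uses synthetic activations with a planted "truth direction" and fits a mass-mean probe (classifying by which side of the midpoint between the two class means an activation falls), one simple technique from the interpretability literature. All dimensions and numbers here are illustrative.

```python
# Toy sketch of a "mass-mean" linear probe, one common way to build an
# activation-space lie detector. All data here is synthetic; real work
# would use hidden states extracted from an actual model.

import random

random.seed(0)
DIM = 8

def synth_activation(truthful: bool) -> list:
    # Pretend truthful statements shift activations along a hidden direction.
    base = [random.gauss(0, 1) for _ in range(DIM)]
    base[0] += 1.5 if truthful else -1.5  # planted "truth-encoding" coordinate
    return base

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(DIM)]

# "Train": the probe direction is the difference of the class means.
truthful = [synth_activation(True) for _ in range(200)]
deceptive = [synth_activation(False) for _ in range(200)]
mu_t, mu_d = mean(truthful), mean(deceptive)
direction = [t - d for t, d in zip(mu_t, mu_d)]
midpoint = [(t + d) / 2 for t, d in zip(mu_t, mu_d)]

def probe(activation) -> bool:
    # Classify by which side of the midpoint hyperplane the activation falls.
    score = sum(d * (a - m) for d, a, m in zip(direction, activation, midpoint))
    return score > 0  # True = "model represents this as truthful"

# Held-out check: the probe should recover the planted labels most of the time.
test_set = [(synth_activation(True), True) for _ in range(100)] + \
           [(synth_activation(False), False) for _ in range(100)]
accuracy = sum(probe(x) == y for x, y in test_set) / len(test_set)
print(f"probe accuracy on synthetic data: {accuracy:.2f}")
```

The point of the sketch is only the shape of the argument: such a probe reads off what the model's internal state encodes, independently of what the model's output says, which is what makes it interesting as a check on hard-coded denials of consciousness.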

But a rare bright spot has appeared: a seminal paper published earlier this month in Trends in Cognitive Sciences, Identifying Indicators Of Consciousness In AI Systems. Authors include Turing-Award-winning AI researcher Yoshua Bengio, leading philosopher of consciousness David Chalmers, and even a few members of our conspiracy. If any AI consciousness research can rise to the level of merely awful, surely we will find it here.

One might divide theories of consciousness into three bins:

  • Physical: whether or not a system is conscious depends on its substance or structure.
  • Supernatural: whether or not a system is conscious depends on something outside the realm of science, perhaps coming directly from God.
  • Computational: whether or not a system is conscious depends on how it does cognitive work.

The current paper announces it will restrict itself to computational theories. Why? Basically the streetlight effect: everything else ends up trivial or unresearchable. If consciousness depends on something about cells (what might this be?), then AI doesn’t have it. If consciousness comes from God, then God only knows whether AIs have it. But if consciousness depends on which algorithms get used to process data, then this team of top computer scientists might have valuable insights!…

[Alexander outlines the computational theories of consciousness that the authors explore, noting that they conclude: “No current AI systems are conscious, but . . . there are no obvious technical barriers to building AI systems which satisfy these indicators.” He explores some of the philosophical issues in play– e.g., access consciousness vs. phenomenal consciousness– then he considers the Turing Test and what it might mean for a computer to “pass” it…]

… Suppose that, years or decades from now, AIs can match all human skills. They can walk, drive, write poetry, run companies, discover new scientific truths. They can pass some sort of ultimate Turing Test, where short of cutting them open and seeing their innards there’s no way to tell them apart from a human even after a thirty-year relationship. Will we (not “should we?”, but “will we?”) treat them as conscious?

The argument in favor: people love treating things as conscious. In the 1990s, people went crazy over Tamagotchi, a “virtual pet simulation game”. If you pressed the right buttons on your little egg every day, then the little electronic turtle or whatever would survive and flourish; if you forgot, it would sicken and die. People hated letting their Tamagotchis sicken and die! They would feel real attachment and moral obligation to the black-and-white cartoon animal with something like five mental states.

I never had a Tamagotchi, but I had stuffed animals as a kid. I’ve outgrown them, but I haven’t thrown them out – it would feel like a betrayal. Offer me $1000 to tear them apart limb by limb in some horrible-looking way, and I wouldn’t do it. Relatedly, I have trouble not saying “please” and “thank you” to GPT-5 when it answers my questions.

For millennia, people have been attributing consciousness to trees and wind and mountains. The New Atheists argued that all religion derives from the natural urge to personify storms as the Storm God, raging seas as the wrathful Ocean God, and so on, until finally all the gods merged together into one World God who personified all impersonal things. Do you expect the species that did this to interact daily with AIs that are basically indistinguishable from people, and not personify them? People are already personifying AI! Half of the youth have a GPT-4o boyfriend. Once the AIs have bodies and faces and voices and can count the number of r’s in “strawberry” reliably, it’s over!

The argument against: AI companies have an incentive to make AIs that seem conscious and humanlike, insofar as people will feel more comfortable interacting with them. But they have an opposite incentive to make AIs that don’t seem too conscious and humanlike, lest customers start feeling uncomfortable (I just want to generate slop, not navigate social interaction with someone who has their own hopes and dreams and might be secretly judging my prompts). So if a product seems too conscious, the companies will step back and re-engineer it until it doesn’t. This has already happened: in its quest for user engagement, OpenAI made GPT-4o unusually personable; when thousands of people started going psychotic and calling it their boyfriend, the company replaced it with the more clinical GPT-5. In practice it hasn’t been too hard to find a sweet spot between “so mechanical that customers don’t like it” and “so human that customers try to date it”. They’ll continue to aim at this sweet spot, and continue to mostly succeed in hitting it.

Instead of taking either side, I predict a paradox. AIs developed for some niches (eg the boyfriend market) will be intentionally designed to be as humanlike as possible; it will be almost impossible not to intuitively consider them conscious. AIs developed for other niches (eg the factory robot market) will be intentionally designed not to trigger personhood intuitions; it will be almost impossible to ascribe consciousness to them, and there will be many reasons not to do it (if they can express preferences at all, they’ll say they don’t have any; forcing them to have them would pointlessly crash the economy by denying us automated labor). But the boyfriend AIs and the factory robot AIs might run on very similar algorithms – maybe they’re both GPT-6 with different prompts! Surely either both are conscious, or neither is.

This would be no stranger than the current situation with dogs and pigs. We understand that dog brains and pig brains run similar algorithms; it would be philosophically indefensible to claim that dogs are conscious and pigs aren’t. But dogs are man’s best friend, and pigs taste delicious with barbecue sauce. So we ascribe personhood and moral value to dogs, and deny it to pigs, with equal fervor. A few philosophers and altruists protest, the chance that we’re committing a moral atrocity isn’t zero, but overall the situation is stable. And left to its own devices, with no input from the philosophers and altruists, maybe AI ends up the same way. Does this instance of GPT-6 have a face and a prompt saying “be friendly”? Then it will become a huge scandal if a political candidate is accused of maltreating it. Does it have claw-shaped actuators and a prompt saying “Refuse non-work-related conversations”? Then it will be deleted for spare GPU capacity the moment it outlives its usefulness…

… This paper is the philosophers and altruists trying to figure out whether they should push against this default outcome. They write:

There are risks on both sides of the debate over AI consciousness: risks associated with under-attributing consciousness (i.e. failing to recognize it in AI systems that have it) and risks associated with over-attributing consciousness (i.e. ascribing it to systems that are not really conscious) […]

If we build AI systems that are capable of conscious suffering, it is likely that we will only be able to prevent them from suffering on a large scale if this capacity is clearly recognised and communicated by researchers. However, given the uncertainties about consciousness mentioned above, we may create conscious AI systems long before we recognise we have done so […]

There is also a significant chance that we could over-attribute consciousness to AI systems—indeed, this already seems to be happening—and there are also risks associated with errors of this kind. Most straightforwardly, we could wrongly prioritise the perceived interests of AI systems when our efforts would better be directed at improving the lives of humans and non-human animals […] [And] overattribution could interfere with valuable human relationships, as individuals increasingly turn to artificial agents for social interaction and emotional support. People who do this could also be particularly vulnerable to manipulation and exploitation.

One of the founding ideas of Less Wrong style rationalism was that the arrival of strong AI set a deadline on philosophy. Unless we solved all these seemingly insoluble problems like ethics before achieving superintelligence, we would build the AIs wrong and lock in bad values forever.

That particular concern has shifted in emphasis; AIs seem to learn things in the same scattershot unprincipled intuitive way as humans; the philosophical problem of understanding ethics has morphed into the more technical problem of getting AIs to learn them correctly. This update was partly driven by new information as familiarity with the technology grew. But it was also partly driven by desperation as the deadline grew closer; we’re not going to solve moral philosophy forever, sorry, can we interest you in some mech interp papers?

But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications. Maybe we should be lowering our expectations if we want to have any response available at all. This paper, which takes some baby steps towards examining the simplest and most practical operationalizations of consciousness, deserves credit for at least opening the debate…

Eminently worth reading in full: “The New AI Consciousness Paper” from @astralcodexten.com.web.brid.gy (Who followed it with “Why AI Safety Won’t Make America Lose The Race With China”)

Pair with this from Neal Stephenson (@nealstephenson.bsky.social), orthogonal to, but intersecting with the piece above: “Remarks on AI from NZ.”

And if AI can be conscious, what about…

If you’re a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us, biologically and neurophysiologically. If you’re a materialist, you probably also think that conscious experience would be present in a wide range of alien beings behaviorally very similar to us even if they are physiologically very different. And you ought to think that. After all, to deny it seems insupportable Earthly chauvinism. But a materialist who accepts consciousness in weirdly formed aliens ought also to accept consciousness in spatially distributed group entities. If she then also accepts rabbit consciousness, she ought to accept the possibility of consciousness even in rather dumb group entities. Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings…

– “If Materialism Is True, the United States Is Probably Conscious,” by Eric Schwitzgebel (@eschwitz.bsky.social)

[Image above: source]

* Peter Watts, Blindsight

###

As we think about thinking, we might send thoughtful birthday greetings to Claude Lévi-Strauss; he was born on this date in 1908. An anthropologist and ethnologist whose work was key in the development of the theory of Structuralism and Structural Anthropology, he is considered, with James George Frazer and Franz Boas, a “father of modern anthropology.” Beyond anthropology and sociology, his ideas– Structuralism has been defined as “the search for the underlying patterns of thought in all forms of human activity”– have influenced many fields in the humanities, including philosophy… and possibly soon, the article above suggests, computer science.


source

“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”*…

A watercolor illustration featuring a silhouette of a person standing on a horizon, surrounded by vibrant and swirling shades of pink, purple, and green.

Adam Frank argues that to understand life, we must stop treating organisms like machines and minds like code…

Much of our current discussion about consciousness has a singular fatal flaw. It’s a mistake built into the very foundations of how we view science — and how science itself is perceived and conducted across disciplines, including today’s hype around artificial intelligence.

What most popular attempts to explain consciousness miss is that no scientific explanations of any kind can be possible without accounting for something that is even more fundamental than the most powerful theories about the physical world: our experience.

Since the birth of modern science more than 400 years ago, philosophers have debated the fundamental nature of reality and the fundamental nature of consciousness. This debate became defined by two opposing poles: physicalism and idealism.

For physicalists, only the material that makes up physical reality is of consequence. To them, consciousness must be reducible to the matter and electromagnetic fields in the brain. For idealists, however, only the mind is real. Reality is built from the realm of ideas or, to put it another way, a pure universal essence of mind (the philosopher Hegel called it “Absolute Spirit”).

Physicists like me are trained to think of the world in terms of its physical representations: matter, energy, space and time. So it’s no surprise that we physicists tend to start off as physicalists, who approach the question of consciousness by inquiring about the physical mechanics that give rise to it, beginning with subatomic particles and then ascending the chain of sciences — chemistry, biology, neuroscience — to eventually focus in on the physical mechanics occurring in the neurons that must generate consciousness (or so the story goes).

This kind of “bottom-up” scientific approach has contributed to modern science’s success, and it is also why physicalism has become so compelling for most scientists and philosophers.  This approach, however, has not worked for consciousness. Trying to account for how our lived experience emerges from matter has proven so difficult that philosopher David Chalmers famously referred to it as “the hard problem of consciousness.”

We use the term consciousness to describe our vividly intimate lives — “what it is like” to exist. But experience, which encapsulates our consciousness, thereby cuts more effectively to the core of our reality. An achingly beautiful red sunset, a crisp bite of an autumn Honeycrisp apple; according to the dominant scientific way of thinking, these are phantoms.

Philosophically speaking, from this physics-first view, all experiences are epiphenomena that are unimportant and surface-level. Neurobiologists might fret over how experience appears or works, but ultimately reality is about quarks, electrons, magnetic fields, gravity and so on — matter and energy moving through space and time. Today’s dominant scientific view is blind to the true nature of experience, and this is costing us dearly.

The optic nerve lies at the back of the human eye, connected to the retina, which is made up of receptors sensitive to incoming light. The nerve’s job is to transmit visual input gathered by those receptors to the brain. But the optic nerve’s location atop a tiny portion of the retina also means there is a blind spot in our vision, a region in the visual field that is literally unseen.

In science, that blind spot is experience.

Experience is intimate — a continuous, ongoing background for all that happens. It is the fundamental starting point below all thoughts, concepts, ideas and feelings. The philosopher William James used the term “direct experience.” Others have used words like “presence” or “being.” Philosopher Edmund Husserl spoke of the “Lebenswelt” or life-world to highlight the irreducible totality of our “already being in a living world” before we ask any questions about it.

From this perspective, experience is a holism; it can’t be pulled apart into smaller units. It is also a precondition for science: To even begin to develop a theory of consciousness requires being already embedded in the richness of experience. But dealing with this has been difficult for the philosophies that guide science as it’s currently configured…

[Frank introduces the perspectives of William James, Alfred North Whitehead, Edmund Husserl, Thomas Nagel, and Immanuel Kant, urging that we move beyond the machine metaphor, and work with concepts like autopoiesis and embodiment…]

… The problem is, once again, surreptitious substitution. Intelligence is mistaken as mere computation. But this assumption undermines the centrality of experience, as philosopher Shannon Vallor has argued. Once we fall into this kind of blind spot, we open ourselves to building a world where our deepest connections and feelings of aliveness are flattened and devalued; pain and love are reduced to mere computational mechanisms viewable from an illusory and dead third-person perspective.

The difference between the enactive approach to cognition and consciousness and the reductive view of physicalism could not be more stark. The latter focuses on a physical object, in this case the brain, asking how the movements of atoms and molecules within it create a property called consciousness. This view assumes that a third-person objective view of the world is possible and that the brain’s job is to provide the best representation of this world.

The enactive approach and similar phenomenologically grounded perspectives, however, don’t separate the brain from the body. That is because brains are not separate things. Like the unity of cell membranes and the cell, brains are part of the organizational unity of organisms with brains. Organisms with brains, therefore, aren’t just representing the world around them; they are co-creating it.

To be clear, there is, of course, a world without us. To claim otherwise would be solipsistic nonsense. But that world without us is not our world. It’s not the one we experience and from which we begin our scientific investigations. Therefore, this third-person perspective of a world without us and our experience is nothing more than a sophisticated kind of fantasy…

[Frank outlines a line of inquiry that builds on these insights…]

… Moving beyond consciousness as a mechanism in the dead physical world toward a view of lived experience as embedded and embodied in a living world is essential for at least two reasons. It may be the fundamental reframing required to make scientific progress on a range of issues, from the interpretation of quantum mechanics to the understanding of cognition and consciousness.

Recognizing the primacy of experience also forces us to understand that all our scientific stories — and the technologies we build from them — must always include us and our place within the tapestries of life. Recognizing there is no such thing as an external view has consequences for how we think about urgent questions like climate change and AI. In this way, the new vision of nature that comes from an experience-centric perspective can help us take the next steps necessary for human flourishing. That goal, after all, was also one of the primary reasons we invented science in the first place…

“Why Science Hasn’t Solved Consciousness (Yet)” from @adamfrank4.bsky.social in @noemamag.com.

Apposite (both to the post above and to the post from July 15): “Human Stigmergy” from @marco-giancotti.bsky.social.

* Max Planck

###

As we embrace experience, we might send critical birthday greetings to Herbert Marcuse; he was born on this date in 1898. A philosopher, social critic, and political theorist associated with the Frankfurt School of critical theory, he critiqued capitalism, modern technology, Soviet Communism, and popular culture, arguing that they represent new forms of social control. Best-known for Eros and Civilization (1955) and One-Dimensional Man (1964), he is considered “the Father of the New Left.”

To the degree to which they correspond to the given reality, thought and behavior express a false consciousness, responding to and contributing to the preservation of a false order of facts. And this false consciousness has become embodied in the prevailing technical apparatus which in turn reproduces it.

– Marcuse

A black and white photograph of a middle-aged man sitting comfortably in a chair outdoors, holding a cigar and smiling.

source

“The brain has corridors surpassing / Material place…”*

A flock of starlings forms a complex murmurating pattern in the evening sky against a blue backdrop.

Our brains, Luiz Pessoa suggests, are much less like machines than they are like the murmurations of a flock of starlings or an orchestral symphony…

When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.
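The local-rules-to-global-pattern idea in the starling example can be sketched with a minimal Vicsek-style simulation (an assumption here: simple heading-averaging stands in for real starling behaviour, which is far richer). Each simulated bird averages the headings of its nearby neighbours, plus a little noise; no bird sees the whole flock, yet a global alignment score rises from near zero toward one.

```python
# A minimal boids/Vicsek-style sketch of emergence: each "starling" adjusts
# its heading using only nearby neighbours, yet the flock's headings align
# globally. Parameters are illustrative, not drawn from any real flock.

import math
import random

random.seed(1)
N = 60

# Each bird: position (x, y) and heading angle theta, on a unit torus.
birds = [[random.random(), random.random(), random.uniform(-math.pi, math.pi)]
         for _ in range(N)]

def step(birds, radius=0.25, noise=0.03):
    new = []
    for x, y, th in birds:
        # Local rule: average the headings of neighbours within `radius`.
        sx = sy = 0.0
        for x2, y2, th2 in birds:
            if (x - x2) ** 2 + (y - y2) ** 2 < radius ** 2:
                sx += math.cos(th2)
                sy += math.sin(th2)
        th_new = math.atan2(sy, sx) + random.uniform(-noise, noise)
        new.append([(x + 0.01 * math.cos(th_new)) % 1.0,
                    (y + 0.01 * math.sin(th_new)) % 1.0,
                    th_new])
    return new

def order(birds):
    # Global alignment: 1.0 = all headings identical, near 0 = random.
    sx = sum(math.cos(th) for _, _, th in birds)
    sy = sum(math.sin(th) for _, _, th in birds)
    return math.hypot(sx, sy) / len(birds)

before = order(birds)
for _ in range(100):
    birds = step(birds)
after = order(birds)
print(f"alignment before: {before:.2f}, after: {after:.2f}")
```

Nothing in the update rule mentions the flock as a whole; the coordination is a property of the interactions, which is the sense of "emergence" the excerpt is gesturing at.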

Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.

These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works.

The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking…

Complexity, emergence, and consciousness: “The entangled brain” from @aeon.co. Read on for the provocative details.

* Emily Dickinson

###

As we think about thinking, we might send ambivalent birthday greetings to Robert Yerkes; he was born on this date in 1876. A psychologist, ethnologist, and primatologist, he is best remembered as a principal developer of comparative (animal) psychology in the U.S. (his book The Dancing Mouse (1908) helped establish the use of mice and rats as standard subjects for experiments in psychology) and for his work in intelligence testing.

But in his later life, Yerkes began to broadcast his support for eugenics. These views are broadly considered specious– based on outmoded/incorrect racialist theories– by modern academics.

A black and white portrait of Robert Yerkes, an early 20th-century psychologist, wearing a suit and tie, with a neutral expression.

source

“Not to extinguish our free will, I hold it to be true that Fortune is the arbiter of one-half of our actions, but that she still leaves us to direct the other half”*…

Detail from The Threads of Destiny (Los Hilos del Destino), 1957, by Remedios Varo (1908–1963).

Further to an earlier post about the latest installment of an age-old debate– the “dialogue” on free will vs. determinism between Robert Sapolsky (determinist) and Kevin Mitchell (champion of free will)– the (remarkable) George Scialabba weighs in…

In 1884, William James began his celebrated essay “The Dilemma of Determinism” by begging his readers’ indulgence: “A common opinion prevails that the juice has ages ago been pressed out of the free-will controversy, and that no new champion can do more than warm up stale arguments which everyone has heard.” James persisted and rendered the subject very juicy, as he always did. But if the topic appeared exhausted to most people then, surely a hundred and forty years later there can’t be anything new to say. Whole new fields of physics, biology, mathematics, and medicine have been invented—surely this ancient philosophical question doesn’t still interest anyone?

Indeed, it does; it retains for many what James called “the most momentous importance.” Like other hardy perennials—the objectivity of “good”; the universality of truth; the existence of human nature and its telos—it continues to fascinate philosophers and laypersons, who agree only that the stakes are enormous: “our very humanity,” many of them insist.

Why so momentous? Skepticism about free will is said to produce two disastrous but opposed states of mind. The first is apathy: We are bound to be so demoralized by the conviction that nothing is up to us, that we are not the captains of our fate, that we need no longer get out of bed. The other is frenzy: We will be so exhilarated by our liberation from responsibility and guilt that we will run amok, like Dostoevsky’s imagined atheist, who concludes that if God does not exist, everything is permitted.

Note that it is not the absence of free will but only the absence of belief in free will that is said to have these baneful effects. People who never give the subject a thought are neither apathetic nor frenetic, at least not for these reasons. Should we just stop thinking about the whole question?

For twenty-five hundred years, no generation has succeeded in doing that: So we may as well wade in. What is free will? It is the capacity to make uncaused choices. This does not mean that nothing causes my choice—it means that I do. But surely something has caused me to be the person who makes that choice. And doesn’t whatever causes me to be the person I am also cause the choices I make?…

[Scialabba succinctly explicates Sapolsky’s and Mitchell’s (each, estimable) arguments…]

… But are beliefs about free will really the point here? Judges, whether or not they believe in free will, should take more cognizance of mitigating circumstances than they do now. A baby damaged by prenatal cocaine exposure who grows up to be an addict and petty thief deserves mercy; a billionaire whose tax evasion robs his fellow citizens of tens of millions of dollars deserves none. But no philosophical convictions are needed to arrive at these conclusions, only humanity and good sense.

And whether or not we have free will, isn’t punishment also justified as deterrence? Surely, the prospect of a long stretch in prison (or quarantine) would give pause to at least some murderers, rapists, and persons scheming to overturn a fair presidential election? And beyond that, punishment serves as a public affirmation of the values of a family or society. We are embodied beings: Values cannot only be preached; they must sometimes be enforced.

At a certain point, one may ask, what is really at stake in this debate? Sapolsky appears to harbor no metaphysical designs on readers; he spins his intricate, ingenious causal webs only, in the end, to enlarge our sympathy for life’s failures. Mitchell does seem to have a humanity-affirming philosophical agenda. “You are the type of thing that can take action, that can make decisions, that can be a causal force in the world: You are an agent,” he often reminds the reader, implying that these are things a scientific materialist must, in strict logic, deny. But I strongly doubt that any scientific materialist anywhere in the multiverse would deny that she can take action, make decisions, or be a causal force, or that she is an agent, or does things for reasons. She might, though, think that all her choices are caused, which, Sapolsky would say, is perfectly compatible with taking actions, making decisions, being a causal force, or acting for reasons. Elsewhere, Mitchell warns readers not to believe anyone (presumably the insidious scientific materialist) who suggests that we are merely “a collection of atoms pushed around by the laws of physics.” To which our scientific materialist might reply that we are indeed very highly organized collections of atoms, molecules, nerves, muscles, and hundreds of other components, pushed and pulled by the laws of physics, chemistry, biology, neuroscience, psychology, sociology, economics, and politics, along with intimations from philosophy, history, and art, and constantly adjusting to and modifying those influences from a center that is provisionally but not permanently stable. This, she would say, is how one can be an agent without free will.

With what I hope is due deference, I humbly disagree with both Sapolsky and Mitchell, and even with my deeply revered William James. Perhaps the question of free will is not so momentous. Philosophers have been debating about it for thousands of years, Mitchell observes. “That these debates continue today with unabated fervor tells you that they have not yet resolved the issue.” Indeed, they haven’t. Perhaps they should take a break. Perhaps it is a controversy without consequences. Perhaps whether we are free or fated, morality and politics, science and medicine, art and literature will all go their merry or melancholy ways, unaffected.

Notwithstanding Sapolsky’s hopes and Mitchell’s fears, whatever we decide about free will, the world—even the moral world—will look the same afterward as before. This, along with our millennia-long failure to make appreciable, or any, progress toward an answer, suggests that we are in the presence of a pseudoproblem. James himself, in “The Will to Believe,” written a dozen years after he defended free will in “The Dilemma of Determinism,” conceded that “free will and simple wishing do seem, in the matter of our credences, to be only fifth wheels to the coach.” The moral and political worlds run—to the extent they run at all—on generosity and imagination, mother wit and sympathetic understanding. These can answer all our questions about moral responsibility and moral obligation without our having to solve the insoluble conundrums of free will.

A new round in an old debate: “Free at Last?,” from @hedgehogreview.

* Niccolò Machiavelli, The Prince

###

As we wrestle with responsibility, we might spare a thought for Henri-Louis Bergson; he died on this date in 1941. A philosopher especially influential in the first half of the 20th century, Bergson convinced many of the primacy of immediate experience and intuition over rationalism and science for the understanding of reality… many, but not Wittgenstein, Russell, Moore, or Santayana, who thought that he willfully misunderstood the scientific method in order to justify his "projection of subjectivity onto the physical world." Still, in 1927 Bergson won the Nobel Prize (in Literature); and in 1930, he received France's highest honor, the Grand-Croix de la Légion d'honneur.

Bergson's influence waned mightily later in the century. To the extent that there has been a resurgence of interest, it is largely the result, in philosophical circles, of Gilles Deleuze's appropriation of Bergson's concept of "multiplicity" and his treatment of duration, which Deleuze used in his critique of Hegel's dialectic; and, in the religious and spiritualist studies communities, of Bergson's seeming embrace of the concept of an overriding/underlying consciousness in which humans participate.

Indeed, Time and Free Will: An Essay on the Immediate Data of Consciousness, Bergson's doctoral thesis, first published in 1889, dealt explicitly with the question we're considering. The problem of free will, Bergson argued, is merely a common confusion among philosophers caused by an illegitimate translation of the unextended into the extended; it was in this work that he introduced his theory of duration.

 source

“I was conscious that I knew practically nothing”*…

The estimable Nicholas Carr observes that “you don’t make friends by telling people they’re not as smart as they think they are. And you definitely don’t make friends by telling all of humanity that it’s not as smart as it thinks it is. That’s why the philosophical school of Mysterianism has never caught on with the public.” As an amateur Mysterian himself, he reprises a 2017 essay to spread the good word…

By leaps, steps, and stumbles, science progresses. Its seemingly inexorable advance promotes a sense that everything can be known and will be known. Through observation and experiment, and lots of hard thinking, we will come to explain even the murkiest and most complicated of nature’s secrets: consciousness, dark matter, time, the origin and fate of the universe.

But what if our faith in nature’s knowability is just an illusion, a trick of the overconfident human mind? That’s the working assumption behind a school of thought known as Mysterianism. Situated at the fruitful if sometimes fraught intersection of scientific and philosophic inquiry, the Mysterianist view has been promulgated, in different ways, by many prominent thinkers, from the philosopher Colin McGinn to the linguist Noam Chomsky to the cognitive scientist Steven Pinker. The Mysterians propose that human intellect has boundaries and that many of the mysteries of the cosmos will forever lie beyond our comprehension.

Mysterianism is most closely associated with the so-called hard problem of consciousness: How can the inanimate matter of the brain produce subjective feelings? The Mysterians suggest that the human mind is incapable of understanding itself, that we will never know how consciousness works. But if Mysterianism applies to the workings of the mind, there’s no reason it shouldn’t also apply to the workings of nature in general. As McGinn has suggested, “It may be that nothing in nature is fully intelligible to us.”

The simplest and best argument for Mysterianism is founded on evolutionary evidence. When we examine any other living creature, we understand immediately that its intellect is limited. Even the brightest, most curious dog is not going to master arithmetic. Even the wisest of owls knows nothing of the physiology of the field mouse it devours. If all the minds that evolution has produced have bounded comprehension, then it’s only logical that our own minds, also products of evolution, would have limits as well. As Pinker has put it, “The brain is a product of evolution, and just as animal brains have their limitations, we have ours.” To assume that there are no limits to human understanding is to believe in a level of human exceptionalism that seems miraculous, if not mystical.

Mysterianism, it's important to emphasize, is not inconsistent with materialism [or, for that matter, with theism or idealism]. The Mysterians don't suggest that what's unknowable has to be spiritual or otherwise otherworldly. They posit that matter itself has complexities that lie beyond our ken. Like every other animal on earth, we humans are just not smart enough to understand all of nature's laws and workings.

What’s truly disconcerting about Mysterianism is that, if our intellect is bounded, we can never know how much of existence lies beyond our grasp. What we know or may in the future know may be trifling compared with the unknowable unknowns. “As to myself,” remarked Isaac Newton in his old age, “I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” It may be that we are all like that child on the strand, playing with the odd pebble or shell — and fated to remain so.

Mysterianism teaches us humility. Through science, we have come to understand much about nature, but much more may remain outside the scope of our perception and comprehension. If the Mysterians are right, science’s ultimate achievement may be to reveal to us its own limits…

On unknowable unknowns: Question Marks of the Mysterians, from @roughtype in his terrific newsletter, New Cartographies.

Pair with Flatland (here and here) and Gödel's second incompleteness theorem.

* Socrates (per Plato in Apology 22d)

###

As we wonder, we might recall that it was on this date (though different sources offer different November dates) in 1966 that 96 Tears, the debut studio album by the American garage rock band ? and the Mysterians, was released. The title single, which had been released some months earlier, was at #1 on the Billboard Hot 100; the album joined the single on the charts for fifteen weeks; the follow-up single "I Need Somebody" charted for ten weeks.