(Roughly) Daily


“What is the reward for knowing the worst?”*…

A humorous cartoon depicting a therapy session, where a patient sitting on a couch asks the doctor if he can resume his affection for his mother due to his marked improvement.

The estimable Adam Phillips on the (ultimately constructive) tension between psychoanalysis and (especially American) pragmatism…

When Richard Rorty​ wrote, in one of his many familiar pragmatist pronouncements, that the only way you can tell if something is true is if it helps you get the life you want, it sounded either like a provocative assertion or another advertisement, masquerading as epistemology, for consumer capitalism. How one feels about Rorty’s eloquent, deliberate and subtle brashness depends on one’s education and sensibility, on one’s cultural preferences and prejudices, and indeed on one’s politics. There may be a significant difference between getting the life I want and getting the life ‘we’ might want, between a certain kind of possessive, acquisitive individualism and a collective political project (the phrase ‘the life I want’ also implies a stability and a degree of certainty in myself; the idea of the life I want fixes the flux of myself). And there are also, by the same token, interesting difficulties in using Rorty’s pragmatist definition of truth in relation to psychoanalysis, which in a quite different way claims to have an interest in truth and in the lives people claim that they want. Rorty’s description of truth here, read in a psychoanalytic context, couldn’t easily be squared with, say, Lacan’s goal for psychoanalytic treatment, which, in the useful words of Slavoj Žižek, clearly seeks a different version of truth. Lacan’s goal for psychoanalytic treatment, Žižek writes, ‘is not the patient’s wellbeing, successful social life or personal fulfilment, but to bring the patient to confront the elementary co-ordinates and deadlocks of his or her desire’. It doesn’t sound as though helping the patient get the life he wants is among Lacan’s priorities (and ‘deadlocks’, of course, aren’t Rorty’s thing). This can’t help but make us wonder whether, or in what sense, Freud’s psychoanalysis has got anything to do with getting the life you want; and if it doesn’t, what it might be to do with. 
Freud does, after all, put wishing at the centre of his theory, but only to radically temper it; as if to say, what you think you want is where the problems start. And yet wanting is what, for both psychoanalysis and American pragmatism, there is, in William James’s words, ‘to be going on from’. Both Freud’s psychoanalysis and Rorty’s pragmatism tell us, in their different ways, why wanting matters, and also that wanting has become the thing we most want to know about, as though now we are simply our wants.

It is easy to forget that all accounts of the goals of psychoanalysis are prescriptions presented as descriptions. In the guise of telling us what the goal of psychoanalysis is – what the concept of cure is, what a successful treatment entails – theorists are simply giving us their own account of what they take a good life to be and what they assume a person wants (a person who walks into an analyst’s office walks into a vocabulary, and a vocabulary is always a vocabulary of wants). Psychoanalysts, to their credit, have been more than willing to tell us what the good is that we should seek; though not quite so willing to open up their proposed goods for discussion, or indeed to suggest that their proposed goods might be experiments in living and not absolute values. For Freud, the goal is recovering the capacity to love and work, or, rather more grimly, to turn hysterical misery into ordinary unhappiness. For Lacan it is ‘not giving ground relative to one’s desire’; for Klein it is reaching the Depressive Position; for Winnicott it is about enabling the patient to play and to surprise themselves; for Ferenczi the patient is not cured through free association, but cured when he can free associate, and so on and on and on. All the interesting psychoanalytic theorists are telling us what, in their view, constitutes a good life. Old-fashioned psychoanalysis always had a known destination.

What the Rortyan pragmatist wants us to ask is whether and in what way, say, Lacanian psychoanalysis helps us to get the life we want, understood in terms of the good we have been encouraged to seek. It does not need us to ask whether Lacanian theory and practice is in any sense true. Pragmatism wants us to ask, what is the life we want – or think we want? Whereas psychoanalysis wants us to ask, why do we not want to know what we want? (According to Michel Serres, the only modern question is: what is it you don’t want to know about yourself?) Psychoanalysis wants us to ask – against the grain of traditional philosophy – why do we obscure the good that we seek? Pragmatism takes for granted that the good we seek is what we want and asks us how we are going to go about getting it. Indeed, pragmatism tells us that we are good at knowing what we want and good at letting our wants change. In an implicit critique of, among other things, American pragmatism, Charles Taylor, in The Ethics of Authenticity, defines his notion of a moral ideal: ‘I mean a picture of what a better or higher mode of life would be where “better” and “higher” are defined not in terms of what we happen to desire or need, but offer a standard of what we ought to desire.’ Rorty’s work always runs the risk of seeming to promote a kind of capricious, impulsive egotism.

Clearly psychoanalysis and American pragmatism are uneasy bedfellows; they fall out over the phrase ‘knowing what you want’…

Read on to Phillips’ unpacking of that tension, and for his “resolution”: “On Getting the Life You Want,” from @lrb.co.uk.

(Image above: source)

* Donald Barthelme, Snow White

###

As we decipher desire, we might send experimental birthday greetings to Stanley Milgram; he was born on this date in 1933. A social psychologist, he is best known for his obedience experiment conducted at Yale University in 1961, three months after the start of the trial of German Nazi war criminal Adolf Eichmann in Jerusalem. The experiment found, unexpectedly, that a very high proportion of subjects (asked to administer painful electric shocks to “learners” they believed they were supervising) would fully obey the instructions, albeit reluctantly. Milgram first described his research in a 1963 article in the Journal of Abnormal and Social Psychology and later discussed his findings in greater depth in his 1974 book, Obedience to Authority: An Experimental View.

Among the most controversial of all psychology studies ever published, the experiment has been repeated many times around the globe, and with fairly consistent results; but its interpretations have been in dispute from the start.

Black and white portrait of a man with a beard, wearing a suit and tie, with a serious expression.

source

“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”*…

A watercolor illustration featuring a silhouette of a person standing on a horizon, surrounded by vibrant and swirling shades of pink, purple, and green.

Adam Frank argues that to understand life, we must stop treating organisms like machines and minds like code…

Much of our current discussion about consciousness has a singular fatal flaw. It’s a mistake built into the very foundations of how we view science — and how science itself is perceived and conducted across disciplines, including today’s hype around artificial intelligence.

What most popular attempts to explain consciousness miss is that no scientific explanations of any kind can be possible without accounting for something that is even more fundamental than the most powerful theories about the physical world: our experience.

Since the birth of modern science more than 400 years ago, philosophers have debated the fundamental nature of reality and the fundamental nature of consciousness. This debate became defined by two opposing poles: physicalism and idealism.

For physicalists, only the material that makes up physical reality is of consequence. To them, consciousness must be reducible to the matter and electromagnetic fields in the brain. For idealists, however, only the mind is real. Reality is built from the realm of ideas or, to put it another way, a pure universal essence of mind (the philosopher Hegel called it “Absolute Spirit”).

Physicists like me are trained to think of the world in terms of its physical representations: matter, energy, space and time. So it’s no surprise that we physicists tend to start off as physicalists, who approach the question of consciousness by inquiring about the physical mechanics that give rise to it, beginning with subatomic particles and then ascending the chain of sciences — chemistry, biology, neuroscience — to eventually focus in on the physical mechanics occurring in the neurons that must generate consciousness (or so the story goes).

This kind of “bottom-up” scientific approach has contributed to modern science’s success, and it is also why physicalism has become so compelling for most scientists and philosophers.  This approach, however, has not worked for consciousness. Trying to account for how our lived experience emerges from matter has proven so difficult that philosopher David Chalmers famously referred to it as “the hard problem of consciousness.”

We use the term consciousness to describe our vividly intimate lives — “what it is like” to exist. But experience, which encapsulates our consciousness, thereby cuts more effectively to the core of our reality. An achingly beautiful red sunset, a crisp bite of an autumn Honeycrisp apple; according to the dominant scientific way of thinking, these are phantoms.

Philosophically speaking, from this physics-first view, all experiences are epiphenomena that are unimportant and surface-level. Neurobiologists might fret over how experience appears or works, but ultimately reality is about quarks, electrons, magnetic fields, gravity and so on — matter and energy moving through space and time. Today’s dominant scientific view is blind to the true nature of experience, and this is costing us dearly.

The optic nerve lies at the back of the human eye, connected to the retina, which is made up of receptors sensitive to incoming light. The nerve’s job is to transmit visual input gathered by those receptors to the brain. But the optic nerve’s location atop a tiny portion of the retina also means there is a blind spot in our vision, a region in the visual field that is literally unseen.

In science, that blind spot is experience.

Experience is intimate — a continuous, ongoing background for all that happens. It is the fundamental starting point below all thoughts, concepts, ideas and feelings. The philosopher William James used the term “direct experience.” Others have used words like “presence” or “being.” Philosopher Edmund Husserl spoke of the “Lebenswelt” or life-world to highlight the irreducible totality of our “already being in a living world” before we ask any questions about it.

From this perspective, experience is a holism; it can’t be pulled apart into smaller units. It is also a precondition for science: To even begin to develop a theory of consciousness requires being already embedded in the richness of experience. But dealing with this has been difficult for the philosophies that guide science as it’s currently configured…

[Frank introduces the perspectives of William James, Alfred North Whitehead, Edmund Husserl, Thomas Nagel, and Immanuel Kant, urging that we move beyond the machine metaphor, and work with concepts like autopoiesis and embodiment…]

… The problem is, once again, surreptitious substitution. Intelligence is mistaken as mere computation. But this assumption undermines the centrality of experience, as philosopher Shannon Vallor has argued. Once we fall into this kind of blind spot, we open ourselves to building a world where our deepest connections and feelings of aliveness are flattened and devalued; pain and love are reduced to mere computational mechanisms viewable from an illusory and dead third-person perspective.

The difference between the enactive approach to cognition and consciousness and the reductive view of physicalism could not be more stark. The latter focuses on a physical object, in this case the brain, asking how the movements of atoms and molecules within it create a property called consciousness. This view assumes that a third-person objective view of the world is possible and that the brain’s job is to provide the best representation of this world.

The enactive approach and similar phenomenologically grounded perspectives, however, don’t separate the brain from the body. That is because brains are not separate things. Like the unity of cell membranes and the cell, brains are part of the organizational unity of organisms with brains. Organisms with brains, therefore, aren’t just representing the world around them; they are co-creating it.

To be clear, there is, of course, a world without us. To claim otherwise would be solipsistic nonsense. But that world without us is not our world. It’s not the one we experience and from which we begin our scientific investigations. Therefore, this third-person perspective of a world without us and our experience, is nothing more than a sophisticated kind of fantasy…

[Frank outlines a line of inquiry that builds on these insights…]

… Moving beyond consciousness as a mechanism in the dead physical world toward a view of lived experience as embedded and embodied in a living world is essential for at least two reasons. It may be the fundamental reframing required to make scientific progress on a range of issues, from the interpretation of quantum mechanics to the understanding of cognition and consciousness.

Recognizing the primacy of experience also forces us to understand that all our scientific stories — and the technologies we build from them — must always include us and our place within the tapestries of life. Recognizing there is no such thing as an external view has consequences for how we think about urgent questions like climate change and AI. In this way, the new vision of nature that comes from an experience-centric perspective can help us take the next steps necessary for human flourishing. That goal, after all, was also one of the primary reasons we invented science in the first place…

“Why Science Hasn’t Solved Consciousness (Yet)” from @adamfrank4.bsky.social in @noemamag.com.

Apposite (both to the post above and to the post from July 15): “Human Stigmergy” from @marco-giancotti.bsky.social.

Max Planck

###

As we embrace experience, we might send critical birthday greetings to Herbert Marcuse; he was born on this date in 1898. A philosopher, social critic, and political theorist associated with the Frankfurt School of critical theory, he critiqued capitalism, modern technology, Soviet Communism, and popular culture, arguing that they represent new forms of social control. Best-known for Eros and Civilization (1955) and One-Dimensional Man (1964), he is considered “the Father of the New Left.”

To the degree to which they correspond to the given reality, thought and behavior express a false consciousness, responding to and contributing to the preservation of a false order of facts. And this false consciousness has become embodied in the prevailing technical apparatus which in turn reproduces it.

– Marcuse

A black and white photograph of a middle-aged man sitting comfortably in a chair outdoors, holding a cigar and smiling.

source

“Small irritations can lead to exaggerated reactions”*…

A historical black and white photograph of a group of lumberjacks posing in front of a log cabin, showcasing traditional attire and tools from the late 19th century.

From the annals of abnormal behavior…

In the late 19th century, a rare and highly unusual neuropsychiatric condition was observed among a group of French-Canadian lumberjacks living in the Moosehead Lake region of northern Maine. Those affected exhibited an extreme and exaggerated startle reflex. When startled by a sudden movement or loud noise, they reacted with dramatic involuntary responses, such as leaping into the air, screaming, repeating words, or instantly obeying shouted commands. It was reported that the “jumpers” were primarily of French descent, born in Canada, and worked as lumbermen in the Maine woods.

The mystery of the Jumping Frenchmen of Maine first drew the attention of the scientific community in 1878, when prominent American neurologist George Miller Beard informed members of the American Neurological Association at its annual meeting that he had heard accounts of these lumberjacks and their unusual nervous condition. Two years later, Beard himself travelled to the Moosehead Lake region to see first-hand if the accounts were true. He wasn’t disappointed…

I found two of the Jumpers employed about the hotel. With one of them, a young man twenty-seven years of age, I made the following experiments:

1. While sitting in a chair, with a knife in his hand, with which he was about to cut his tobacco, he was struck sharply on the shoulder, and told to “throw it.” Almost as quick as the explosion of a pistol, he threw the knife, and it stuck in a beam opposite; at the same time he repeated the order “throw it” with a certain cry as of terror or alarm.

2. A moment after, while filling his pipe with tobacco, he was again slapped on the shoulder and told to “throw it.” He threw the tobacco and the pipe on the grass, at least a rod away, with the same cry and the same suddenness and explosiveness of movement.

After observing and examining many jumpers, Beard concluded that jumping was a type of nervous disorder. In a paper published in 1881, Beard wrote:

Jumping is a psychical or mental form of nervous disease, and is of a functional character. Its best analogue is psychical or mental hysteria, the so-called ‘servant-girl hysteria,’ as known to us in modern days, and as very widely known during the epidemics of the middle ages.

Beard surmised that the syndrome of jumping might be tied to tickling:

This disease was probably an evolution of tickling. Some, if not all, of the Jumpers, are ticklish—exceedingly so—and are easily irritated by touching them in sensitive parts of the body. It would appear that in the evenings, in the woods, after the day’s toil, in lieu of most other sources of amusement, the lumbermen have teased each other, by tickling, and playing, and startling timid ones, until there has developed this jumping, which, by mental contagion, and by practice, and by inheritance, has ripened into the full stage of the malady as it appears at the present hour.

Jumping was also found to run strongly in families, suggesting a genetic condition…

[There follow a series of accounts of “jumpers” in other locations (almost all timber-adjacent) and of the evolving explanations offered by experts, concluding…]

… In 1965, Reuben Rabinovitch, an assistant professor of neurology at McGill University, wrote a letter to the editor of the Canadian Medical Association Journal, where he described a children’s game he had witnessed in the Laurentian Mountains, north of Montreal. In this game, a child would secretly follow another, jab them in the ribs, and imitate the sound of a kicking horse. The “victim” was expected to respond by mimicking the sound, leaping into the air, and flinging their arms outward. This form of horseplay, he noted, often continued into adulthood, particularly in isolated villages or lumber camps where recreational outlets were scarce.

Rabinovitch concluded that the Jumping Frenchman syndrome was not a neurological disorder per se, but rather a conditioned reflex that developed out of the monotony and social isolation of life in lumber camps. According to this interpretation, the behaviour became institutionalized within a close-knit community as a form of interaction and entertainment. When the traditional logging camps gradually disappeared, so too did the jumping behaviour.

Further support for this view came in 1986, when two Canadian neurologists and a psychologist studied eight individuals in Quebec who exhibited jumping behaviours. The researchers found that all of the men had developed the condition during adolescence, shortly after beginning work in lumber camps. They reported being teased and provoked by other workers until the jumping behaviour became ingrained.

Based on this evidence, some scholars have argued that the Jumping Frenchman syndrome is not a medical condition or a case of collective hysteria, but a classic case of operant conditioning – a learned behaviour reinforced by social stimuli – that developed in a closed community.

The long and the short of it is that the Jumping Frenchmen of Maine may have more to do with human nature than with neurology. In the rough, close-knit world of lumber camps, where entertainment was scarce, a peculiar habit took hold that slowly developed into a cultural quirk, or even a very strange joke that went too far. As the lifestyle that nurtured it faded, so did the jumping.

Nothing conclusive has yet been established. The National Organization for Rare Disorders (NORD) still lists Jumping as “an extremely rare disorder” with “no specific therapy”. While it acknowledges the theory of operant conditioning, NORD notes that some researchers believe that jumping Frenchmen of Maine may be a somatic neurological disorder, caused by a gene mutation that occurs after fertilization and is not inherited from the parents or passed on to children.

The organization concludes that further research is needed to understand the exact causes and underlying mechanisms of the Jumping Frenchmen of Maine, as well as other culturally-specific startle disorders…

“The Jumping Frenchmen of Maine,” from @AmusingPlanet.

Vivek Murthy

###

As we query curious comportment, we might send birthday greetings of uncertain provenance to Charles Thomas Jackson; he was born on this date in 1805. A physician and scientist, he was active in medicine, chemistry, mineralogy, and geology, in that lattermost of which he was particularly distinguished.

That said, he is probably best remembered for a series of spectacular claims he made to the work of others: the discovery of guncotton (Christian Friedrich Schönbein), the telegraph (Samuel F. B. Morse), the digestive action of the stomach (William Beaumont), and the anesthetic effects of ether (William T. G. Morton). These claims continued until 1873, when he was hospitalized at McLean Hospital. It was widely believed at the time that the cause was mental illness, brought on either by a seizure or by a manic episode upon seeing Morton’s tombstone.

In fact, Jackson had suffered a left-brain stroke that affected his language area. While he never regained his speech, he was cooperative and did not exhibit the “inappropriate behavior of insanity.” By unanimous vote of the McLean Asylum Trustees, Jackson was hosted as a guest at the hospital at no charge for the entire duration of his stay, in recognition of his [very real] past contributions.

A historical engraving of Charles Thomas Jackson, a 19th-century physician and scientist, depicted wearing glasses and a formal suit, with a beard and a serious expression.

source

“Status is welcome, agreeable, pleasant, & hard to obtain in the world”*…

Illustration of two figures standing on a staircase made of books, with one figure holding a book above their head, symbolizing knowledge and expertise.

We live in a time when a certain kind of status – expertise – is under attack. Dan Williams suggests that by celebrating “common sense” over expert authority, populism performs a dramatic status inversion. It gifts uneducated voters the power of knowledge and deflates those who look down on them…

… As Will Storr argues in The Status Game, humiliation is the “nuclear bomb of the emotions”. When ignited, it can fuel everything from genocide to suicide, mass atrocities to self-immolation. There are few parts of human nature more chaotic, dangerous, or self-destructive. And yet, there is often a rationale underlying these reactions rooted in the strange nature of human sociality.

If humans were solitary animals, we would have evolved to approximate the behaviour of Homo economicus, the idealised rational agent imagined in much of twentieth century economics. We would act in ways that are predictable, sensible, and consistent. The characters depicted in Dostoevsky’s novels would be unintelligible to such a creature, except as victims of mental illness.

But we are not. We are social creatures, and almost everything puzzling and paradoxical about our species is downstream of this fact.

For one thing, we rely on complex networks of cooperation to achieve almost all our goals. Given this, much of human behaviour is rooted not in ordinary material self-interest but in the need to gain access to such networks—to win approval, cultivate a good reputation, and attract partners, friends, and allies. Human decision-making occurs within the confines of this social scrutiny. We evaluate almost every action, habit, and preference not just by its immediate effects but by its reputational impact.

At the same time, much of human competition is driven by the desire for prestige. In well-functioning human societies, individuals advance their interests not by bullying and dominating others but by impressing them. These high-status individuals are admired, respected, and deferred to. They win esteem and all its benefits. Their lives feel meaningful and purposeful.

In contrast, those who fail at the status game—who stack up at the bottom of the prestige hierarchy—experience shame and humiliation. If their position feels unfair, they become resentful and angry. In extreme cases, they might take vengeance on those who look down on them. Or they might take their own life. In some cases, such as mass killings by young men who “lose face” and run “amok” (a Malay word, illustrating the behaviour’s cross-cultural nature), they do both…

… The name of this newsletter, “Conspicuous Cognition”, is inspired by Veblen’s ideas about economics. Just as he sought to correct a misguided tendency to treat economics through a narrowly economic lens, my work and writings seek to correct a similarly misguided tendency to treat cognition—how we think, form beliefs, generate ideas, evaluate evidence, communicate, and so on—through a narrowly cognitive lens.

Much cognition is competitive and conspicuous. People strive to show off their intelligence, knowledge, and wisdom. They compete to win attention and recognition for making novel discoveries or producing rationalisations of what others want to believe. They often reason not to figure out the truth but to persuade and manage their reputation. They often form beliefs not to acquire knowledge but to signal their impressive qualities and loyalties.

Placed in this context of social competition and impression management, what might be called “epistemic charity”—the free offer of knowledge and expertise—takes on a different appearance. Although this charity can be driven by disinterested altruism (think of parents educating their children), it can also result from status competition and a desire to show off.

In some cases, people are happy to receive such epistemic charity and heap praise and admiration on those who provide it. The wonders of modern science emerge from a status game that celebrates those who make discoveries. However, we sometimes recoil at the thought of admitting someone has discovered something new, or—even worse—that they know better than we do. When that happens, we are not sceptical of the truth of their ideas, although we might choose to frame things that way. Rather, their offer of knowledge carries a symbolic significance we want to reject. It hurts our pride. It feels humiliating.

On a small scale, this feeling is an everyday occurrence. Few people like to be corrected, to admit they are wrong, or to acknowledge another’s superior knowledge, wisdom, or intelligence. On a larger scale, it might be implicated in some of the most significant and dangerous trends in modern politics.

Many of our most profound political problems appear to be entangled with epistemic issues. Think of our alleged crises of “disinformation”, “misinformation”, “post-truth”, and conspiracy theories. Think of the spread of viral lies and falsehoods on social media. Think of intense ideological polarisation, vicious political debates, and heated culture wars, disagreements and conflicts that ultimately concern what is true.

A critical aspect of these problems is the so-called “crisis of expertise”, the widespread populist rejection of claims advanced in institutions like science, universities, public health organisations, and mainstream media. Famously, many populists have “had enough of experts.” As Trump once put it, “The experts are terrible.”

This rejection of expertise goes beyond mere scepticism. It is actively hostile. The Trump administration’s recent attacks on Harvard and other elite universities provide one illustration of this hostility, but there are many others. Most obviously, there is the proud willingness among many populists to spread and accept falsehoods, conspiracy theories, and quack science in the face of an exasperated barrage of “fact-checks” from establishment institutions. Why are these corrections so politically impotent? Why do so many voters refuse to “follow the science” or “trust the experts”?

Experts have produced many theories. Some point to ignorance and stupidity. Some point to disinformation and mass manipulation. Some point to partisan media, echo chambers, and algorithms. And some suggest that the crisis might be related to objective failures by experts themselves.

There is likely some truth in all these explanations. Nevertheless, they share a common assumption: that the “crisis of expertise” is best understood in epistemic terms. They assume that populist hostility to the expert class reflects scepticism that their expertise is genuine—that they really know what they claim to know.

Perhaps this assumption is mistaken. Perhaps at least in some cases, the crisis of expertise is less about doubting expert knowledge than about rejecting the social hierarchy that “trust the experts” implies… some populists might sooner accept ignorance than epistemic charity from those they refuse to acknowledge as superior…

… If this analysis is correct, the populist rejection of expertise is not merely an intellectual disagreement over truth or evidence, even if it is typically presented that way. It is, in part, a proud refusal to accept epistemic charity from those who present themselves as social superiors.

In the case of populist elites and conspiracy theorists, this refusal is often driven by objectionable feelings of grandiosity and narcissism. However, for many ordinary voters, it may serve as a more understandable dignity-defence mechanism, a refusal to accept the social meanings implied by one-way deference to elites with alien values. It is less “post-truth” than anti-humiliation.

This would help to explain several features of the populist rejection of expertise.

First, there is its emotional signature. In many cases, the populist refusal to defer to experts appears to be wrapped up in intense emotions of resentment, indignation, and defiant pride, rather than simple scepticism.

Second, the rejection of expert authority often has a performative character. Experts are not merely ignored. They are actively, angrily, and proudly rejected. Like Captain Snegiryov, the populist publicly tramples on the expert’s offer of knowledge.

Third, there is the destructive aspect of many populist sentiments. If the issue were merely scepticism of experts and establishment institutions, the solution would presumably involve targeted reforms designed to make them more reliable. Yet, as recent Republican attacks on elite universities make clear, many populists prefer to take a sledgehammer to these institutions. The explosive hostility towards public health experts during the pandemic provides another telling example.

Finally, there is the fact that populists often embrace anti-intellectualism as an identity marker, a badge of pride. The valorisation of gut instincts, the proposed “revolution of common sense”, and the embrace of slogans like “do your own research” affirm the status of those who prioritise intuition over experts. The demonisation of “ivory tower academics”, “blue-haired”, “woke” professors, and the “chattering classes” is crafted to have a similar effect. This all looks more like status-inverting propaganda than intellectual disagreement over truth and trustworthiness.

To understand is not to forgive. Just as we can empathise with Snegiryov’s refusal of much-needed money whilst condemning it as short-sighted and self-destructive, we can try to understand the populist rejection of expertise without endorsing or justifying it.

To be clear, there are profound problems with our expert class and elite institutions. They routinely make errors, sometimes catastrophic ones, and often wield their social authority in ways that advance their own interests over the public good. The Iraq war, the financial crisis, and the many failures of policy and communication throughout the pandemic provide powerful illustrations of these expert failures, but there are many others.

Moreover, the social and political uniformity of experts today creates legitimate concerns about their trustworthiness. When scientific journals, public health authorities, and fact-checking organisations are obviously shaped by the values, partisan allegiances, and sensibilities of highly educated, progressive professionals, it is reasonable for those with very different values and identities to become mistrustful of them.

Nevertheless, there is no alternative to credentialed experts in complex, modern societies. To address the political challenges we confront today, we need specialised training, rigorous standards of evidence, and coordinated activity within institutions carefully engineered to produce knowledge. Although these institutions must be reformed in countless ways, they are indispensable.

Given this, the populists’ rejection of expertise does not liberate them from bias and error. It guarantees bias and error. Gut instincts, intuition, and “common sense” are fundamentally unreliable ways of producing knowledge. As we see with the MAGA media ecosystem today, the valorisation of such methods means returning to a pre-scientific, medieval worldview dominated by baseless conspiracy theories, snake oil medicine, economic illiteracy, and know-nothing punditry.

And yet, the dangers associated with this style of politics underscore the importance of understanding its causes. If the crisis of expertise is partly rooted in feelings of status threat, resentment, and humiliation, this has significant implications for how we should think about—and address—this crisis.

Most obviously, it suggests that purely epistemic solutions will have limited efficacy. You cannot fact-check your way out of status competition. And as long as the acceptance of expert guidance is experienced as an admission of social inferiority, there will be a lucrative market for demagogues and bullshitters who produce more status-affirming narratives.

Moreover, it suggests that rebuilding trust in experts means more than improving their reliability, as crucial as that is. Institutions dominated by a single social class and political tribe will inevitably face resistance and backlash in broader society, regardless of their technical competence.

We do not just need better ways of producing knowledge. We need to rethink how knowledge is offered: in ways that respect people’s pride and minimise the humiliations of one-sided epistemic charity…

Eminently worth reading in full: “Status, class, and the crisis of expertise,” from @danwphilosophy.bsky.social.

(Image above: source)

* Buddha (Ittha Sutta, AN 5.43)

###

As we dig dignity, we might send classy birthday greetings to George Bryan “Beau” Brummell; he was born on this date in 1778. An important figure in Regency England (a close pal of the Prince Regent, the future King George IV), he became the arbiter of men’s fashion in London and in the territories under its cultural sway.

Brummell was remembered afterwards as the preeminent example of the dandy; a whole literature was founded on his manner and witty sayings, e.g. “Fashions come and go; bad taste is timeless.”

Portrait of George Bryan 'Beau' Brummell, a prominent figure in Regency England known for his influence on men's fashion.

source

Written by (Roughly) Daily

June 7, 2025 at 1:00 am

“The brain has corridors surpassing / Material place…”*

A flock of starlings forms a complex murmurating pattern in the evening sky against a blue backdrop.

Our brains, Luiz Pessoa suggests, are much less like machines than they are like the murmurations of a flock of starlings or an orchestral symphony…

When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.

Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.

These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works.

The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking…

Complexity, emergence, and consciousness: “The entangled brain” from @aeon.co. Read on for the provocative details.

* Emily Dickinson

###

As we think about thinking, we might send ambivalent birthday greetings to Robert Yerkes; he was born on this date in 1876. A psychologist, ethologist, and primatologist, he is best remembered as a principal developer of comparative (animal) psychology in the U.S. (his book The Dancing Mouse (1908) helped establish the use of mice and rats as standard subjects for experiments in psychology) and for his work in intelligence testing.

But in his later life, Yerkes began to broadcast his support for eugenics. These views, based on outmoded and incorrect racialist theories, are broadly considered specious by modern academics.

A black and white portrait of Robert Yerkes, an early 20th-century psychologist, wearing a suit and tie, with a neutral expression.

source