Posts Tagged ‘social criticism’
“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”*…
Adam Frank argues that to understand life, we must stop treating organisms like machines and minds like code…
Much of our current discussion about consciousness has a singular fatal flaw. It’s a mistake built into the very foundations of how we view science — and how science itself is perceived and conducted across disciplines, including today’s hype around artificial intelligence.
What most popular attempts to explain consciousness miss is that no scientific explanations of any kind can be possible without accounting for something that is even more fundamental than the most powerful theories about the physical world: our experience.
Since the birth of modern science more than 400 years ago, philosophers have debated the fundamental nature of reality and the fundamental nature of consciousness. This debate became defined by two opposing poles: physicalism and idealism.
For physicalists, only the material that makes up physical reality is of consequence. To them, consciousness must be reducible to the matter and electromagnetic fields in the brain. For idealists, however, only the mind is real. Reality is built from the realm of ideas or, to put it another way, a pure universal essence of mind (the philosopher Hegel called it “Absolute Spirit”).
Physicists like me are trained to think of the world in terms of its physical representations: matter, energy, space and time. So it’s no surprise that we physicists tend to start off as physicalists, who approach the question of consciousness by inquiring about the physical mechanics that give rise to it, beginning with subatomic particles and then ascending the chain of sciences — chemistry, biology, neuroscience — to eventually focus in on the physical mechanics occurring in the neurons that must generate consciousness (or so the story goes).
This kind of “bottom-up” scientific approach has contributed to modern science’s success, and it is also why physicalism has become so compelling for most scientists and philosophers. This approach, however, has not worked for consciousness. Trying to account for how our lived experience emerges from matter has proven so difficult that philosopher David Chalmers famously referred to it as “the hard problem of consciousness.”
We use the term consciousness to describe our vividly intimate lives — “what it is like” to exist. But experience, which encapsulates our consciousness, thereby cuts more effectively to the core of our reality. An achingly beautiful red sunset, a crisp bite of an autumn Honeycrisp apple; according to the dominant scientific way of thinking, these are phantoms.
Philosophically speaking, from this physics-first view, all experiences are epiphenomena that are unimportant and surface-level. Neurobiologists might fret over how experience appears or works, but ultimately reality is about quarks, electrons, magnetic fields, gravity and so on — matter and energy moving through space and time. Today’s dominant scientific view is blind to the true nature of experience, and this is costing us dearly.
The optic nerve lies at the back of the human eye, connected to the retina, which is made up of receptors sensitive to incoming light. The nerve’s job is to transmit visual input gathered by those receptors to the brain. But the optic nerve’s location atop a tiny portion of the retina also means there is a blind spot in our vision, a region in the visual field that is literally unseen.
In science, that blind spot is experience.
Experience is intimate — a continuous, ongoing background for all that happens. It is the fundamental starting point below all thoughts, concepts, ideas and feelings. The philosopher William James used the term “direct experience.” Others have used words like “presence” or “being.” Philosopher Edmund Husserl spoke of the “Lebenswelt” or life-world to highlight the irreducible totality of our “already being in a living world” before we ask any questions about it.
From this perspective, experience is a holism; it can’t be pulled apart into smaller units. It is also a precondition for science: To even begin to develop a theory of consciousness requires being already embedded in the richness of experience. But dealing with this has been difficult for the philosophies that guide science as it’s currently configured…
[Frank introduces the perspectives of William James, Alfred North Whitehead, Edmund Husserl, Thomas Nagel, and Immanuel Kant, urging that we move beyond the machine metaphor, and work with concepts like autopoiesis and embodiment…]
… The problem is, once again, surreptitious substitution. Intelligence is mistaken as mere computation. But this assumption undermines the centrality of experience, as philosopher Shannon Vallor has argued. Once we fall into this kind of blind spot, we open ourselves to building a world where our deepest connections and feelings of aliveness are flattened and devalued; pain and love are reduced to mere computational mechanisms viewable from an illusory and dead third-person perspective.
The difference between the enactive approach to cognition and consciousness and the reductive view of physicalism could not be more stark. The latter focuses on a physical object, in this case the brain, asking how the movements of atoms and molecules within it create a property called consciousness. This view assumes that a third-person objective view of the world is possible and that the brain’s job is to provide the best representation of this world.
The enactive approach and similar phenomenologically grounded perspectives, however, don’t separate the brain from the body. That is because brains are not separate things. Like the unity of cell membranes and the cell, brains are part of the organizational unity of organisms with brains. Organisms with brains, therefore, aren’t just representing the world around them; they are co-creating it.
To be clear, there is, of course, a world without us. To claim otherwise would be solipsistic nonsense. But that world without us is not our world. It’s not the one we experience and from which we begin our scientific investigations. Therefore, this third-person perspective of a world without us and our experience is nothing more than a sophisticated kind of fantasy…
[Frank outlines a line of inquiry that builds on these insights…]
… Moving beyond consciousness as a mechanism in the dead physical world toward a view of lived experience as embedded and embodied in a living world is essential for at least two reasons. It may be the fundamental reframing required to make scientific progress on a range of issues, from the interpretation of quantum mechanics to the understanding of cognition and consciousness.
Recognizing the primacy of experience also forces us to understand that all our scientific stories — and the technologies we build from them — must always include us and our place within the tapestries of life. Recognizing there is no such thing as an external view has consequences for how we think about urgent questions like climate change and AI. In this way, the new vision of nature that comes from an experience-centric perspective can help us take the next steps necessary for human flourishing. That goal, after all, was also one of the primary reasons we invented science in the first place…
“Why Science Hasn’t Solved Consciousness (Yet)” from @adamfrank4.bsky.social in @noemamag.com.
Apposite (both to the post above and to the post from July 15): “Human Stigmergy” from @marco-giancotti.bsky.social.
* Max Planck
###
As we embrace experience, we might send critical birthday greetings to Herbert Marcuse; he was born on this date in 1898. A philosopher, social critic, and political theorist associated with the Frankfurt School of critical theory, he critiqued capitalism, modern technology, Soviet Communism, and popular culture, arguing that they represent new forms of social control. Best-known for Eros and Civilization (1955) and One-Dimensional Man (1964), he is considered “the Father of the New Left.”
To the degree to which they correspond to the given reality, thought and behavior express a false consciousness, responding to and contributing to the preservation of a false order of facts. And this false consciousness has become embodied in the prevailing technical apparatus which in turn reproduces it.
– Marcuse
Written by (Roughly) Daily
July 19, 2025 at 1:00 am
Posted in Uncategorized
Tagged with autopoiesis, cognition, consciousness, culture, embodiment, experience, Frankfurt School, Herbert Marcuse, history, holism, machine metaphor, Marcuse, memory, New Left, philosophy, Psychology, Science, social criticism, Technology
“The great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing.”*…

Data storyteller RJ Andrews demonstrates…
How do creatives – composers, painters, writers, scientists, philosophers – find the time to produce their opus?
Each routine day is represented as a continuous 24-hour cycle. Midnight is placed at the 12 o’clock position and noon at 6 o’clock. Colors mark major categories of activity, including work, sleep, and exercise…
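For readers curious about the mechanics, here is a minimal sketch of how such a radial day-chart can be drawn with Python’s matplotlib. This is not RJ Andrews’s actual code, and the sample schedule, colors, and category names are invented purely for illustration:

```python
# A minimal sketch of a 24-hour radial "routine" chart in matplotlib.
# The schedule below is invented for illustration; it is not data from
# RJ Andrews's "Creative Routines" graphic.
import matplotlib.pyplot as plt
import numpy as np

# (start_hour, end_hour, activity) for one hypothetical creator's day
schedule = [
    (0, 7, "sleep"),
    (7, 8, "exercise"),
    (8, 13, "work"),
    (13, 14, "meal"),
    (14, 18, "work"),
    (18, 24, "leisure"),
]
colors = {
    "sleep": "#4c72b0", "exercise": "#55a868",
    "work": "#c44e52", "meal": "#ccb974", "leisure": "#8172b2",
}

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_theta_zero_location("N")  # midnight at the 12 o'clock position
ax.set_theta_direction(-1)       # hours run clockwise, so noon lands at 6 o'clock

for start, end, activity in schedule:
    width = 2 * np.pi * (end - start) / 24        # angular span of the activity
    center = 2 * np.pi * start / 24 + width / 2   # angular midpoint of the wedge
    ax.bar(center, height=1, width=width, color=colors[activity])

ax.set_xticks(np.linspace(0, 2 * np.pi, 24, endpoint=False))
ax.set_xticklabels(range(24))  # hour labels around the rim
ax.set_yticks([])              # hide radial ticks
plt.show()
```

Placing midnight at the top and running the hours clockwise mirrors a clock face, which is what makes these charts legible at a glance.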
The daily rituals of great creators: “Creative Routines,” from @infowetrust.
* Mason Currey, Daily Rituals: How Artists Work (the source of much of the data that informed the graphics above)
###
As we contemplate cultivating customs, we might send learned birthday greetings to Desiderius Erasmus Roterodamus, better known simply as Erasmus; he was born on this date in 1466. A Catholic priest, social critic, teacher, translator, and theologian, probably best remembered for his book In Praise of Folly, he was the greatest scholar of the northern Renaissance, the first editor of the New Testament (“Do unto others…”), and an important figure in patristics and classical literature.
Erasmus had contrasting experiences of routine: on being orphaned, he was sent to a series of monastic or semi-monastic schools, which he despised both for their discipline and for their disdain of inquiry. Graduating with few prospects, he joined an Augustinian monastery, where he considered his superiors “barbarians” for discouraging his classical studies. On ordination, he escaped– and began a career marked at first by struggle, as he balanced the demands of study with those of serving as a clerk, a priest, and a tutor… all while trying to distinguish himself as a poet.
His luck changed in 1499, when he connected with a reformist English circle (notably John Colet and Thomas More), then with the radical French Franciscan Jean Vitrier, and later with the Aldine New Academy in Venice… connections that led, as the Reformation unfolded, to his emergence as a prime influencer of European thought. Among fellow scholars and philosophers of that era he was– and is still– known as the “Prince of the Humanists.”
Erasmus’ views were contentious in his time and elicited a good bit of criticism. While we don’t know too much about his daily routine, we do know that for the last half of his life it regularly included enough time to read these attacks and to prepare and publish apologetic works in his own defense, in many cases leading to a long series of back-and-forth polemical books… kind of like a Twitter feud, but at the speed of Gutenberg.

Portrait of Erasmus of Rotterdam (1523) by Hans Holbein the Younger
Written by (Roughly) Daily
October 28, 2024 at 1:00 am
Posted in Uncategorized
Tagged with artists, creatives, creators, culture, data storytelling, data visualization, Erasmus, habit, history, history of ideas, Humanism, literature, philosophy, Reformation, ritual, routine, satire, social criticism, theology, Thomas More
“We live, in fact, in a world starved for solitude, silence, and privacy: and therefore starved for meditation and true friendship”*…
… if then, even more so now. Ben Tarnoff takes off from Lowry Pressly‘s new book to ponder why privacy matters and why we have such trouble even thinking about how to protect it…
… Today, it is harder to keep one’s mind in place. Our thoughts leak through the sieve of our smartphones, where they join the great river of everyone else’s. The consequences, for both our personal and collective lives, are much discussed: How can we safeguard our privacy against state and corporate surveillance? Is Instagram making teen-agers depressed? Is our attention span shrinking?
There is no doubt that an omnipresent Internet connection, and the attendant computerization of everything, is inducing profound changes. Yet the conversation that has sprung up around these changes can sometimes feel a little predictable. The same themes and phrases tend to reappear. As the Internet and the companies that control it have become an object of permanent public concern, the concerns themselves have calcified into clichés. There is an algorithmic quality to our grievances with algorithmic life.
Lowry Pressly’s new book, “The Right to Oblivion: Privacy and the Good Life,” defies this pattern. It is a radiantly original contribution to a conversation gravely in need of new thinking. Pressly, who teaches political science at Stanford, takes up familiar fixations of tech discourse—privacy, mental health, civic strife—but puts them into such a new and surprising arrangement that they are nearly unrecognizable. The effect is like walking through your home town after a tornado: you recognize the buildings, but after some vigorous jumbling they have acquired a very different shape.
Pressly trained as a philosopher, and he has a philosopher’s fondness for sniffing out unspoken assumptions. He finds one that he considers fundamental to our networked era: “the idea that information has a natural existence in human affairs, and that there are no aspects of human life which cannot be translated somehow into data.” This belief, which he calls the “ideology of information,” has an obvious instrumental value to companies whose business models depend on the mass production of data, and to government agencies whose machinery of monitoring and repression relies on the same.
But Pressly also sees the ideology of information lurking in a less likely place—among privacy advocates trying to defend us from digital intrusions. This is because the standard view of privacy assumes there is “some information that already exists,” and what matters is keeping it out of the wrong hands. Such an assumption, for Pressly, is fatal. It “misses privacy’s true value and unwittingly aids the forces it takes itself to be resisting,” he writes. To be clear, Pressly is not opposed to reforms that would give us more power over our data—but it is a mistake “to think that this is what privacy is for.” “Privacy is valuable not because it empowers us to exercise control over our information,” he argues, “but because it protects against the creation of such information in the first place.”
If this idea sounds intriguing but exotic, you may be surprised to learn how common it once was. “A sense that privacy is fundamentally opposed to information has animated public moral discourse on the subject since the very beginning,” Pressly writes…
[Tarnoff recaps Pressly’s brief history of the technologies that changed our relationship to information, from Kodak through CCTV to AI…]
… The reason that Pressly feels so strongly about imposing limits on datafication is not only because of the many ways that data can be used to damage us. It is also because, in his view, we lose something precious when we become information, regardless of how it is used. In the very moment when data are made, Pressly believes, a line is crossed. “Oblivion” is his word for what lies on the other side.
Oblivion is a realm of ambiguity and potential. It is fluid, formless, and opaque. A secret is an unknown that can become known. Oblivion, by contrast, is unknowable: it holds those varieties of human experience which are “essentially resistant to articulation and discovery.” It is also a place beyond “deliberate, rational control,” where we lose ourselves or, as Pressly puts it, “come apart.” Sex and sleep are two of the examples he provides. Both bring us into the “unaccountable regions of the self,” those depths at which our ego dissolves and about which it is difficult to speak in definite terms. Physical intimacy is hard to render in words—“The experience is deflated by description,” Pressly observes—and the same is notoriously true of the dreams we have while sleeping, which we struggle to narrate, or even to remember, on waking.
Oblivion is fragile, however. When it comes into contact with information, it disappears. This is why we need privacy: it is the protective barrier that keeps oblivion safe from information. Such protection insures that “one can actually enter into oblivion from time to time, and that it will form a reliably available part of the structure of one’s society.”
But why do we need to enter into oblivion from time to time, and what good does it do us? Pressly gives a long list of answers, drawn not only from the Victorians but also from the work of Michel Foucault, Roland Barthes, Gay Talese, Jorge Luis Borges, and Hannah Arendt. One is that oblivion is restorative: we come apart in order to come back together. (Sleep is a case in point; without a nightly suspension of our rational faculties, we go nuts.) Another is the notion that oblivion is integral to the possibility of personal evolution. “The main interest in life and work is to become someone else that you were not in the beginning,” Foucault writes. To do so, however, you must believe that the future can be different from the past—a belief that becomes harder to sustain when one is besieged by information, as the obsessive documentation of life makes it “more fixed, more factual, with less ambiguity and life-giving potentiality.” Oblivion, by setting aside a space for forgetting, offers a refuge from this “excess of memory,” and thus a standpoint from which to imagine alternative futures.
Oblivion is also essential for human dignity. Because we cannot be fully known, we cannot be fully instrumentalized. Immanuel Kant urged us to treat others as ends in themselves, not merely as means. For Pressly, our obscurities are precisely what endow us with a sense of value that exceeds our usefulness. This, in turn, helps assure us that life is worth living, and that our fellow human beings are worthy of our trust. “There can be no trust of any sort without some limits to knowledge,” Pressly writes…
… Psychoanalysis first emerged in the late nineteenth century, in parallel with the idea of privacy. This was a period when the boundary between public and private was being redrawn, not only with the onslaught of handheld cameras but also, more broadly, because of the dislocating forces of what historians call the Second Industrial Revolution. Urbanization pulled workers from the countryside and packed them into cities, while mass production meant they could buy (rather than make) most of what they needed. These developments weakened the institution of the family, which lost its primacy as people fled rural kin networks and the production of life’s necessities moved from the household to the factory.
In response, a new freedom appeared. For the first time, the historian Eli Zaretsky observes, “personal identity became a problem and a project for individuals.” If you didn’t have your family to tell you who you were, you had to figure it out yourself. Psychoanalysis helped the moderns to make sense of this question, and to try to arrive at an answer.
More than a century later, the situation looks different. If an earlier stage of capitalism laid the material foundations for a new experience of individuality, the present stage seems to be producing the opposite. In their taverns, theatres, and dance halls, the city dwellers of the Second Industrial Revolution created a culture of social and sexual experimentation. Today’s young people are lonely and sexless. At least part of the reason is the permanent connectivity that, as Pressly argues, conveys the feeling that “one’s time and attention—that is to say, one’s life—are not entirely one’s own.”
The modernist city promised anonymity, reinvention. The Internet is devoid of such pleasures. It is more like a village: a place where your identity is fixed. Online, we are the sum of what we have searched, clicked, liked, and bought. But there are futures beyond those predicted through statistical extrapolations from the present. In fact, the past is filled with the arrival of such futures: those blind corners when no amount of information could tell you what was coming. History has a habit of humbling its participants. Somewhere in its strange rhythms sits the lifelong work of making a life of one’s own…
We often want to keep some information to ourselves. But information itself may be the problem: “What Is Privacy For?” from @bentarnoff in @NewYorker. (Possible paywall; archived link here.)
Pair with the two (marvelous, provocative) documentary series from Adam Curtis and the BBC: The Century of the Self and HyperNormalisation, both of which are available on YouTube.
* C. S. Lewis
###
As we make room, we might send painfully observant birthday greetings to Lenny Bruce; he was born on this date in 1925. A comedian, social critic, and satirist, he was ranked (in a 2017 Rolling Stone poll) the third-best stand-up comic of all time– behind Richard Pryor and George Carlin, both of whom credited Bruce as an influence.
Written by (Roughly) Daily
October 13, 2024 at 1:00 am
Posted in Uncategorized
Tagged with comedy, community, creativity, culture, dignity, history, information, Kant, Lenny Bruce, oblivion, personality, philosophy, privacy, satire, social criticism, surveillance, Technology