(Roughly) Daily


“I tend to think that most fears about A.I. are best understood as fears about capitalism”*…

Further to Wednesday‘s and yesterday‘s posts (on to other topics again after this, I promise), a powerful piece from Patrick Tanguay (in his always-illuminating Sentiers newsletter).

He begins with a consideration of Peter Wolfendale’s “Geist in the machine”…

… Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we”… end up with a completely different relationship to our technology and capital. Still, his argument up to that point is a worthy reflection, and pairs well with the one below and another from issue No.387. I’m talking about Anil Seth’s The mythology of conscious AI, where he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open:

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account…

Tanguay then turns to “The Prospect of Butlerian Jihad” by Liam Mullally, in which Mullally uses…

… Herbert’s Dune and the Butlerian Jihad [here] as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. [see here] If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons:

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.

As Max Read (scroll down) observes:

… if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary…

The question isn’t whether we want a relationship with technology; it’s what kind of relationship we want. We’ve always (at least since we’ve been a conscious species) co-existed with, and been shaped by, tools; we’ve always suffered the “friction” of technological transition as we innovate new tools. As yesterday’s post suggested (in its defense of the open web in the face of a voracious attack from powerful LLM companies), “what matters is power“… power to shape the relationship(s) we have with the technologies we use. That power is currently in the hands of a relatively few companies, all concerned above all else with harvesting as much money as they can from “uses” they design to amplify engagement and ease monetization. It doesn’t, of course, have to be this way.

We’ve lived under modern capitalism for only a few hundred years, and under the hyper-global, hyper-extractive regime we currently inhabit for only a century-and-a-half or so, during which time, in fits and starts, it has grown ever more rapacious. George Monbiot observed that “like coal, capitalism has brought many benefits. But, like coal, it now causes more harm than good.” And Ursula Le Guin, that “we live in capitalism. Its power seems inescapable. So did the divine right of kings.” In many countries, “divine right” monarchy has been replaced by “constitutional monarchy.” Perhaps it’s time for more of the world to consider “constitutional capitalism.” We could start by learning from the successes and failures of Scandinavia and Europe.

Social media, AI, quantum computing– on being clear as to the real issue: “Geist in the machine & The prospect of Butlerian Jihad,” from @inevernu.bsky.social.

Apposite: “The enclosure of the commons inaugurates a new ecological order. Enclosure did not just physically transfer the control over grasslands from the peasants to the lord. It marked a radical change in the attitudes of society toward the environment.”

(All this said, David Chalmers argues that there’s one possibility that might change everything: “Could a Large Language Model be Conscious?” On the other hand, the ARC Prize Foundation suggests, we have some time: a test they devised for benchmarking agentic intelligence recently found that “humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%”… :)

* Ted Chiang (gift article; see also here and here and here)

###

As we keep our eyes on the prize, we might spare a thought for a man who wrestled with a version of these same issues in the last century, Pierre Teilhard de Chardin; he died on this date in 1955.  A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky‘s concept of the noosphere.  Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory.  His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, the Church lifted its ban.

source

“Men have become the tools of their tools”*…

Visionary philosopher Bernard Stiegler argued that it’s not our technology that makes humans special; rather, it’s our relationship with that technology. Bryan Norton explains…

It has become almost impossible to separate the effects of digital technologies from our everyday experiences. Reality is parsed through glowing screens, unending data feeds, biometric feedback loops, digital prostheses and expanding networks that link our virtual selves to satellite arrays in geostationary orbit. Wristwatches interpret our physical condition by counting steps and heartbeats. Phones track how we spend our time online, map the geographic location of the places we visit and record our histories in digital archives. Social media platforms forge alliances and create new political possibilities. And vast wireless networks – connecting satellites, drones and ‘smart’ weapons – determine how the wars of our era are being waged. Our experiences of the world are soaked with digital technologies.

But for the French philosopher Bernard Stiegler, one of the earliest and foremost theorists of our digital age, understanding the world requires us to move beyond the standard view of technology. Stiegler believed that technology is not just about the effects of digital tools and the ways that they impact our lives. It is not just about how devices are created and wielded by powerful organisations, nation-states or individuals. Our relationship with technology is about something deeper and more fundamental. It is about technics.

According to Stiegler, technics – the making and use of technology, in the broadest sense – is what makes us human. Our unique way of existing in the world, as distinct from other species, is defined by the experiences and knowledge our tools make possible, whether that is a state-of-the-art brain-computer interface such as Neuralink, or a prehistoric flint axe used to clear a forest. But don’t be mistaken: ‘technics’ is not simply another word for ‘technology’. As Martin Heidegger wrote in his essay ‘The Question Concerning Technology’ (1954), which used the German term Technik instead of Technologie in the original title: the ‘essence of technology is by no means anything technological.’ This aligns with the history of the word: the etymology of ‘technics’ leads us back to something like the ancient Greek term for art – technē. The essence of technology, then, is not found in a device, such as the one you are using to read this essay. It is an open-ended creative process, a relationship with our tools and the world.

This is Stiegler’s legacy. Throughout his life, he took this idea of technics, first explored while he was imprisoned for armed robbery, further than anyone else. But his ideas have often been overlooked and misunderstood, even before he died in 2020. Today, they are more necessary than ever. How else can we learn to disentangle the effects of digital technologies from our everyday experiences? How else can we begin to grasp the history of our strange reality?…

[Norton unspools Stiegler’s remarkable life and the development of his thought…]

… Technology, for better or worse, affects every aspect of our lives. Our very sense of who we are is shaped and reshaped by the tools we have at our disposal. The problem, for Stiegler, is that when we pay too much attention to our tools, rather than how they are developed and deployed, we fail to understand our reality. We become trapped, merely describing the technological world on its own terms and making it even harder to untangle the effects of digital technologies and our everyday experiences. By encouraging us to pay closer attention to this world-making capacity, with its potential to harm and heal, Stiegler is showing us what else is possible. There are other ways of living, of being, of evolving. It is technics, not technology, that will give the future its new face…

Eminently worth reading in full: “Our tools shape our selves,” from @br_norton in @aeonmag.

Compare and contrast: Kevin Kelly‘s What Technology Wants

* Henry David Thoreau

###

As we own up, we might send phenomenological birthday greetings to Immanuel Kant; he was born on this date in 1724.  One of the central figures of modern philosophy, Kant is remembered primarily for his efforts to unite reason with experience (e.g., Critique of Pure Reason [Kritik der reinen Vernunft], 1781), and for his work on ethics (e.g., Metaphysics of Morals [Die Metaphysik der Sitten], 1797) and aesthetics (e.g., Critique of Judgment [Kritik der Urteilskraft], 1790).  

But Kant made important contributions to mathematics and astronomy. For example: his argument that mathematical truths are a form of synthetic a priori knowledge was cited by Einstein as an important early influence on his work.  And his description of the Milky Way as a lens-shaped collection of stars that represented only one of many “island universes,” was later shown to be accurate by Herschel.

Act so as to treat humanity, whether in your own person or in that of another, at all times also as an end, and not only as a means.

Metaphysic of Morals

source