(Roughly) Daily

Posts Tagged ‘Kant’

“I tend to think that most fears about A.I. are best understood as fears about capitalism”*…

Further to Wednesday‘s and yesterday‘s posts (on to other topics again after this, I promise), a powerful piece from Patrick Tanguay (in his always-illuminating Sentiers newsletter).

He begins with a consideration of Peter Wolfendale’s “Geist in the machine”:

… Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we”… end up with a completely different relationship to our technology and capital. Still, his argument up to that point is a worthy reflection, and it pairs well both with the piece below and with another from issue No.387: Anil Seth’s The mythology of conscious AI, in which Seth argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open:

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account…

Tanguay then turns to “The Prospect of Butlerian Jihad” by Liam Mullally, in which Mullally uses…

… Herbert’s Dune and the Butlerian Jihad [here] as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. [see here] If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons:

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.

As Max Read (scroll down) observes:

… if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary…

The question isn’t whether we want a relationship with technology; it’s what kind of relationship we want. We’ve always (at least since we’ve been a conscious species) co-existed with, and been shaped by, tools; we’ve always suffered the “friction” of technological transition as we innovate new tools. As yesterday’s post suggested (in its defense of the open web in the face of a voracious attack from powerful LLM companies), “what matters is power“… power to shape the relationship(s) we have with the technologies we use. That power is currently in the hands of relatively few companies, all concerned above all else with harvesting as much money as they can from “uses” designed to amplify engagement and ease monetization. It doesn’t, of course, have to be this way.

We’ve lived under modern capitalism for only a few hundred years, and under the hyper-global, hyper-extractive regime we currently inhabit for only a century-and-a-half or so, during which time, in fits and starts, it has grown ever more rapacious. George Monbiot observed that “like coal, capitalism has brought many benefits. But, like coal, it now causes more harm than good.” And Ursula Le Guin, that “we live in capitalism. Its power seems inescapable. So did the divine right of kings.” In many countries, “divine right” monarchy has been replaced by “constitutional monarchy.” Perhaps it’s time for more of the world to consider “constitutional capitalism.” We could start by learning from the successes and failures of Scandinavia and Europe.

Social media, AI, quantum computing– on being clear as to the real issue: “Geist in the machine & The prospect of Butlerian Jihad,” from @inevernu.bsky.social.

Apposite: “The enclosure of the commons inaugurates a new ecological order. Enclosure did not just physically transfer the control over grasslands from the peasants to the lord. It marked a radical change in the attitudes of society toward the environment.”

(All this said, David Chalmers argues that there’s one possibility that might change everything: “Could a Large Language Model be Conscious?” On the other hand, the ARC Prize Foundation suggests, we have some time: a test they devised for benchmarking agentic intelligence recently found that “humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%”… :)

* Ted Chiang (gift article; see also here and here and here)

###

As we keep our eyes on the prize, we might spare a thought for a man who wrestled with a version of these same issues in the last century, Pierre Teilhard de Chardin; he died on this date in 1955.  A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky‘s concept of the noosphere.  Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory.  His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, the Church lifted its ban.

source

“Show, don’t tell”*…

[Illustration: two stick figures, one red and one blue, both saying “hi” amid explosive lines, beneath the title “The Ozma Problem”]

Some things are very difficult to explain using words alone; they require physical demonstration. Consider, for example, the distinction between right and left. It turns out that this difficulty has been at the heart of the great scientific debates about the nature of space…

… explain right and left to a friend using language alone and without using the words right and left. As you can only use language, you can’t show your hands or use pictures!

It’s tricky, isn’t it? The difference between right and left isn’t as straightforward as it seems. If we dig a little deeper, we will find that the science behind right and left is surprising, complex, and profound.

How can two things be identical yet different at the same time? This was the question that puzzled one of humankind’s greatest thinkers, Immanuel Kant.

Many of the great debates of the Scientific Revolution during the 16th and 17th centuries concerned the nature of space. The English polymath Sir Isaac Newton proposed that space was absolute: space is an entity in itself and exists even without objects, matter, or living beings filling it. 

In contrast, Gottfried Leibniz, Newton’s bitter rival, argued that space was relational: it only existed because of the relations between the objects that fill it. If objects do not exist, then space doesn’t either.

Meanwhile, Immanuel Kant used handedness to give his two cents. He asked us to imagine a solitary hand floating in an otherwise completely empty space. The hand must either be a right hand or a left hand, and this will be the case even in a space where no relationships between objects can be observed. Kant noted that our hands are geometrically and mathematically identical in every way possible, whether it be the lengths of the fingers or the angles between them. Yet, the one fundamental difference between them—that one is a right hand, and the other is a left hand—exists in itself; it is intrinsic to the hand and not related to any other object. For Kant, this suggested that handedness reflects an absolute property of space itself.

Ultimately, Kant’s theories of handedness were not foolproof and could not be used to prove that space is absolute. Indeed, Kant would switch between the Newtonian and Leibnizian schools of thought during his lifetime. However, Kant did show just how puzzling and difficult it is to explain why right hands and left hands are identical but different. That intrinsic quality of handedness is almost impossible to explain without showing, and this is the root of the Ozma Problem.
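[An aside not in the essay: Kant’s puzzle of “incongruent counterparts” has a crisp modern statement. A minimal sketch in standard linear-algebra terms:]

```latex
% Orientation-preserving rigid motions are rotations R in SO(3):
\[
R^{\top}R = I, \qquad \det R = +1 .
\]
% A left hand and a right hand are related by a mirror reflection, e.g.
\[
M = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad \det M = -1 .
\]
% Determinants multiply, so no composition of rotations can ever equal
% a reflection:
\[
\det(R_1 R_2 \cdots R_n) = (+1)^n = +1 \;\neq\; -1 .
\]
```

[The two hands agree in every internal measurement (lengths, angles), yet no sequence of rotations in three-dimensional space carries one onto the other; only a mirror does. That is the sense in which they are “identical yet different.”]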

In 1960, Project Ozma was launched in West Virginia. Named after the ruler of the fictional Land of Oz, Project Ozma used a large radio telescope to listen for signals from space, signals that could be proof of extraterrestrial intelligence. Unfortunately, the project ran for only a few months, and it had no major success.

Let’s say the telescope had picked up these signals. How would we on Earth respond? We would need to decode their signals, after which we would send our own. Telescopes and computers use binary code. And directionality is crucial to binary: a string of bits has a definite value only once both parties agree which end is the most significant. So, if we are sending binary signals to aliens, we need to be sure they understand which direction the message should be read. How can we be sure they share our understanding of directions?
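[Another aside not in the essay: a minimal Python sketch of the bit-direction ambiguity; the four-bit string is illustrative only.]

```python
# A bare string of bits has no definite value until sender and receiver
# agree on its direction: which end is the most-significant bit?

bits = "1101"

# Convention 1: leftmost bit is most significant (MSB-first).
msb_first = int(bits, 2)          # 1*8 + 1*4 + 0*2 + 1*1 = 13

# Convention 2: the same physical signal read in the other direction
# (rightmost bit most significant, i.e. LSB-first).
lsb_first = int(bits[::-1], 2)    # 1*8 + 0*4 + 1*2 + 1*1 = 11

print(msb_first, lsb_first)       # 13 11 -- one signal, two readings
```

[Terrestrial protocols face the same question as bit order and byte order (endianness) and settle it by prior written agreement; an interstellar message has no shared specification to lean on.]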

This is the Ozma Problem, a thought experiment first described by Martin Gardner [see the almanac entry here] in his 1964 book, The Ambidextrous Universe. In this book, Gardner pitched a number of solutions.

Before going into Gardner’s work, here’s a seemingly simple solution: lay your palms face down on a table and equally spaced from your body. The thumb that’s closer to your heart? That’s the left side. The right side is defined by the thumb farther away from the heart.

Another potential solution would be to use north and south as reference points: when facing north, everything towards east is the right side, and everything pointing west is the left side.

The problem with these solutions is that they both rely on a shared point of reference, like the direction of north-south-east-west and the location of the heart. In no way can we be certain that an alien species would share these!

Some of the solutions that Gardner proposed in his book use magnetic fields, planetary rotation, and the direction of current flow. And as we discussed before, they all fail because of the need for a shared point of reference. 

So, after centuries of wondering whether we are alone in the universe, we finally make contact with an alien species, only to find that our inability to explain something as mundane as right and left precludes meaningful dialogue. The Ozma Problem demonstrates the limits of our language, and it challenges anthropocentrism, which is the notion that human beings and our experiences are the center of the universe.

Many thought problems are hypothetical and can’t be solved, but the Ozma Problem does have a solution. In fact, the solution already existed when Gardner first described it. But it’s not immediately associated with right-left asymmetry or aliens.


While we cannot be sure that aliens share our anatomy or our perception of north-south-east-west, if they inhabit the same universe as us, we can assume the fundamental forces of physics apply to them too.

There are four fundamental forces of physics: gravity, electromagnetism, strong nuclear forces (the force that binds atomic nuclei together), and weak nuclear forces (the force that causes atomic decay).

Up until 1956, it was assumed these fundamental forces all display parity. Parity is an important concept in physics, and it can be demonstrated visually by using a mirror. If we stand in front of a mirror holding an apple in the right hand and then drop it, the reflection will show it falling to the ground, but from the left hand. Gravity still works in the reflection. Likewise, if we look at the strong forces binding atomic nuclei and then observe them in a mirror, the images would be identical, just with right and left switched.

But in 1956, Professor Chien-Shiung Wu, a physicist, conducted a groundbreaking experiment. She was able to prove that the weak nuclear force—the force behind the decay of atoms—did not always demonstrate parity. The weak nuclear force does not adhere to mirror symmetry.

Professor Wu showed this by observing the decay of cobalt-60 atoms. When atoms decay, they spin out electrons. Up until then, scientists had always observed these electrons spinning out equally in all directions. But Professor Wu saw that cobalt-60 will always preferentially spin out electrons in a certain direction. In other words, the movement is asymmetric. For some reason, the weak force behind atomic decay is the one fundamental force that does not adhere to parity or mirror symmetry, thus showing that directionality is intrinsic to the universe, just as Kant had postulated in the 18th century.

For the first time in history, it was proven that nature can prefer one direction. Very soon after Wu’s findings, physicists were able to prove that elementary particles known as neutrinos are always left-handed: their spin points opposite their direction of motion.
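[A further aside not in the essay: what Wu observed, stated compactly in conventional notation.]

```latex
% Under the parity (mirror) transformation P, polar vectors such as
% momentum flip, while axial vectors such as spin do not:
\[
P:\ \vec{x} \mapsto -\vec{x}, \qquad \vec{p} \mapsto -\vec{p}, \qquad \vec{J} \mapsto \vec{J}.
\]
% The spin-momentum correlation is therefore a pseudoscalar,
\[
\langle \vec{J} \cdot \hat{p}_e \rangle \;\xrightarrow{\;P\;}\; -\langle \vec{J} \cdot \hat{p}_e \rangle ,
\]
% so if the weak interaction respected mirror symmetry this average
% would have to vanish. Wu measured it nonzero: electrons leave
% polarized cobalt-60 preferentially opposite the nuclear spin J.
% Likewise the helicity of a neutrino,
\[
h = \frac{\vec{S} \cdot \vec{p}}{\lvert \vec{S} \rvert\, \lvert \vec{p} \rvert},
\]
% is always observed to be negative (left-handed).
```

[Because the sign comes out the same for any experimenter anywhere, it can serve as a universal, convention-free definition of left.]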

What does this mean for our communication with aliens? If the aliens can replicate Professor Wu’s experiment and observe the direction in which electrons emerge as cobalt-60 decays, they can orient right and left!

Ironically, Professor Wu was not afforded any sort of parity herself during her working life. Other scientists were recognized for research that could not have been achieved without hers. Today, the weak force remains one of the most important and mysterious topics in physics, thanks to Professor Wu.

So, if the only way to scientifically and definitively define the difference between right and left is to run a delicate nuclear-physics experiment and observe the decay of cobalt-60, clearly the difference is not as straightforward as it may first seem! The Ozma Problem is proof that the most mundane concepts are sometimes directly linked to the cosmos and speak to the nature of existence itself…

An essay by Dr. Maloy Das (see the bio in this unrelated– but also fascinating– article by him). From the remarkable blog Fascinating World, rated a highly credible source by MBFC (Media Bias/Fact Check) for its proper sourcing, no failed fact-checks, and “highly factual” reporting. It’s the work of Krishna Rathuryan, currently a senior at a prep school in Princeton (where he’s also apparently a pretty accomplished distance runner), and a team of his friends.

When language fails: “What Is The Ozma Problem, And Why Does It Matter?”

* attributed to playwright Anton Chekhov, who said “Don’t tell me the moon is shining; show me the glint of light on broken glass.” It has, of course, become a motto for many writers across genres.

###

As we explore explanation, we (especially any readers in or near Manhattan Beach, California) might note that today is one of the two days of the year (symmetrically on either side of the winter solstice, 37 days before and 37 after) when the public sculpture there, “Light Gate,” becomes a portal “unlocked” by the rays of the setting sun… as Atlas Obscura puts it, “a bit of Druidic paganism by way of high modern design.”

[Photo: a circular public sculpture with vibrant glass panels catching the setting sun]

source

“We live, in fact, in a world starved for solitude, silence, and privacy: and therefore starved for meditation and true friendship”*…

… if that was true then, it is even more so now. Ben Tarnoff takes off from Lowry Pressly‘s new book to ponder why privacy matters and why we have such trouble even thinking about how to protect it…

… Today, it is harder to keep one’s mind in place. Our thoughts leak through the sieve of our smartphones, where they join the great river of everyone else’s. The consequences, for both our personal and collective lives, are much discussed: How can we safeguard our privacy against state and corporate surveillance? Is Instagram making teen-agers depressed? Is our attention span shrinking?

There is no doubt that an omnipresent Internet connection, and the attendant computerization of everything, is inducing profound changes. Yet the conversation that has sprung up around these changes can sometimes feel a little predictable. The same themes and phrases tend to reappear. As the Internet and the companies that control it have become an object of permanent public concern, the concerns themselves have calcified into clichés. There is an algorithmic quality to our grievances with algorithmic life.

Lowry Pressly’s new book, “The Right to Oblivion: Privacy and the Good Life,” defies this pattern. It is a radiantly original contribution to a conversation gravely in need of new thinking. Pressly, who teaches political science at Stanford, takes up familiar fixations of tech discourse—privacy, mental health, civic strife—but puts them into such a new and surprising arrangement that they are nearly unrecognizable. The effect is like walking through your home town after a tornado: you recognize the buildings, but after some vigorous jumbling they have acquired a very different shape.

Pressly trained as a philosopher, and he has a philosopher’s fondness for sniffing out unspoken assumptions. He finds one that he considers fundamental to our networked era: “the idea that information has a natural existence in human affairs, and that there are no aspects of human life which cannot be translated somehow into data.” This belief, which he calls the “ideology of information,” has an obvious instrumental value to companies whose business models depend on the mass production of data, and to government agencies whose machinery of monitoring and repression relies on the same.

But Pressly also sees the ideology of information lurking in a less likely place—among privacy advocates trying to defend us from digital intrusions. This is because the standard view of privacy assumes there is “some information that already exists,” and what matters is keeping it out of the wrong hands. Such an assumption, for Pressly, is fatal. It “misses privacy’s true value and unwittingly aids the forces it takes itself to be resisting,” he writes. To be clear, Pressly is not opposed to reforms that would give us more power over our data—but it is a mistake “to think that this is what privacy is for.” “Privacy is valuable not because it empowers us to exercise control over our information,” he argues, “but because it protects against the creation of such information in the first place.”

If this idea sounds intriguing but exotic, you may be surprised to learn how common it once was. “A sense that privacy is fundamentally opposed to information has animated public moral discourse on the subject since the very beginning,” Pressly writes…

[Tarnoff recaps Pressly’s brief history of the technologies that changed our relationship to information, from Kodak through CCTV, to AI…]

… The reason that Pressly feels so strongly about imposing limits on datafication is not only because of the many ways that data can be used to damage us. It is also because, in his view, we lose something precious when we become information, regardless of how it is used. In the very moment when data are made, Pressly believes, a line is crossed. “Oblivion” is his word for what lies on the other side.

Oblivion is a realm of ambiguity and potential. It is fluid, formless, and opaque. A secret is an unknown that can become known. Oblivion, by contrast, is unknowable: it holds those varieties of human experience which are “essentially resistant to articulation and discovery.” It is also a place beyond “deliberate, rational control,” where we lose ourselves or, as Pressly puts it, “come apart.” Sex and sleep are two of the examples he provides. Both bring us into the “unaccountable regions of the self,” those depths at which our ego dissolves and about which it is difficult to speak in definite terms. Physical intimacy is hard to render in words—“The experience is deflated by description,” Pressly observes—and the same is notoriously true of the dreams we have while sleeping, which we struggle to narrate, or even to remember, on waking.

Oblivion is fragile, however. When it comes into contact with information, it disappears. This is why we need privacy: it is the protective barrier that keeps oblivion safe from information. Such protection insures that “one can actually enter into oblivion from time to time, and that it will form a reliably available part of the structure of one’s society.”

But why do we need to enter into oblivion from time to time, and what good does it do us? Pressly gives a long list of answers, drawn not only from the Victorians but also from the work of Michel Foucault, Roland Barthes, Gay Talese, Jorge Luis Borges, and Hannah Arendt. One is that oblivion is restorative: we come apart in order to come back together. (Sleep is a case in point; without a nightly suspension of our rational faculties, we go nuts.) Another is the notion that oblivion is integral to the possibility of personal evolution. “The main interest in life and work is to become someone else that you were not in the beginning,” Foucault writes. To do so, however, you must believe that the future can be different from the past—a belief that becomes harder to sustain when one is besieged by information, as the obsessive documentation of life makes it “more fixed, more factual, with less ambiguity and life-giving potentiality.” Oblivion, by setting aside a space for forgetting, offers a refuge from this “excess of memory,” and thus a standpoint from which to imagine alternative futures.

Oblivion is also essential for human dignity. Because we cannot be fully known, we cannot be fully instrumentalized. Immanuel Kant urged us to treat others as ends in themselves, not merely as means. For Pressly, our obscurities are precisely what endow us with a sense of value that exceeds our usefulness. This, in turn, helps assure us that life is worth living, and that our fellow human beings are worthy of our trust. “There can be no trust of any sort without some limits to knowledge,” Pressly writes…

… Psychoanalysis first emerged in the late nineteenth century, in parallel with the idea of privacy. This was a period when the boundary between public and private was being redrawn, not only with the onslaught of handheld cameras but also, more broadly, because of the dislocating forces of what historians call the Second Industrial Revolution. Urbanization pulled workers from the countryside and packed them into cities, while mass production meant they could buy (rather than make) most of what they needed. These developments weakened the institution of the family, which lost its primacy as people fled rural kin networks and the production of life’s necessities moved from the household to the factory.

In response, a new freedom appeared. For the first time, the historian Eli Zaretsky observes, “personal identity became a problem and a project for individuals.” If you didn’t have your family to tell you who you were, you had to figure it out yourself. Psychoanalysis helped the moderns to make sense of this question, and to try to arrive at an answer.

More than a century later, the situation looks different. If an earlier stage of capitalism laid the material foundations for a new experience of individuality, the present stage seems to be producing the opposite. In their taverns, theatres, and dance halls, the city dwellers of the Second Industrial Revolution created a culture of social and sexual experimentation. Today’s young people are lonely and sexless. At least part of the reason is the permanent connectivity that, as Pressly argues, conveys the feeling that “one’s time and attention—that is to say, one’s life—are not entirely one’s own.”

The modernist city promised anonymity, reinvention. The Internet is devoid of such pleasures. It is more like a village: a place where your identity is fixed. Online, we are the sum of what we have searched, clicked, liked, and bought. But there are futures beyond those predicted through statistical extrapolations from the present. In fact, the past is filled with the arrival of such futures: those blind corners when no amount of information could tell you what was coming. History has a habit of humbling its participants. Somewhere in its strange rhythms sits the lifelong work of making a life of one’s own…

We often want to keep some information to ourselves. But information itself may be the problem: “What Is Privacy For?” from @bentarnoff in @NewYorker. (Possible paywall; archived link here.)

Pair with two (marvelous, provocative) documentary series from Adam Curtis and the BBC: The Century of the Self and HyperNormalisation, both of which are available on YouTube.

* C. S. Lewis

###

As we make room, we might send painfully-observant birthday greetings to Lenny Bruce; he was born on this date in 1925. A comedian, social critic, and satirist, he was ranked (in a 2017 Rolling Stone poll) the third best stand-up comic of all time– behind Richard Pryor and George Carlin, both of whom credited Bruce as an influence.

source


“Few people have the imagination for reality”*…

Experiments that test physics and philosophy as “a single whole,” Amanda Gefter suggests, may be our only route to surefire knowledge about the universe…

Metaphysics is the branch of philosophy that deals in the deep scaffolding of the world: the nature of space, time, causation and existence, the foundations of reality itself. It’s generally considered untestable, since metaphysical assumptions underlie all our efforts to conduct tests and interpret results. Those assumptions usually go unspoken.

Most of the time, that’s fine. Intuitions we have about the way the world works rarely conflict with our everyday experience. At speeds far slower than the speed of light or at scales far larger than the quantum one, we can, for instance, assume that objects have definite features independent of our measurements, that we all share a universal space and time, that a fact for one of us is a fact for all. As long as our philosophy works, it lurks undetected in the background, leading us to mistakenly believe that science is something separable from metaphysics.

But at the uncharted edges of experience — at high speeds and tiny scales — those intuitions cease to serve us, making it impossible for us to do science without confronting our philosophical assumptions head-on. Suddenly we find ourselves in a place where science and philosophy can no longer be neatly distinguished. A place, according to the physicist Eric Cavalcanti, called “experimental metaphysics.”

Cavalcanti is carrying the torch of a tradition that stretches back through a long line of rebellious thinkers who have resisted the usual dividing lines between physics and philosophy. In experimental metaphysics, the tools of science can be used to test our philosophical worldviews, which in turn can be used to better understand science. Cavalcanti, a 46-year-old native of Brazil who is a professor at Griffith University in Brisbane, Australia, and his colleagues have published the strongest result attained in experimental metaphysics yet, a theorem that places strict and surprising constraints on the nature of reality. They’re now designing clever, if controversial, experiments to test our assumptions not only about physics, but about the mind.

While we might expect the injection of philosophy into science to result in something less scientific, in fact, says Cavalcanti, the opposite is true. “In some sense, the knowledge that we obtain through experimental metaphysics is more secure and more scientific,” he said, because it vets not only our scientific hypotheses but the premises that usually lie hidden beneath…
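[An aside: the canonical example of a metaphysical premise made experimentally testable is Bell’s theorem, in its CHSH form; a sketch of that classic inequality, offered for flavor here (Cavalcanti and colleagues’ recent theorem is a stronger descendant of it):]

```latex
% Suppose measurement outcomes A_1, A_2, B_1, B_2 = +/-1 are fixed by
% pre-existing local facts (local hidden variables). Then
\[
S = \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle \;\le\; 2 .
\]
% Quantum mechanics predicts, and experiments confirm, values up to
\[
S_{\max} = 2\sqrt{2} \approx 2.83 ,
\]
% so at least one premise (locality, or predetermined outcomes) must go.
```

[A metaphysical assumption, rendered experimentally decidable.]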

Gefter traces the history of this integrative train of thought (Kant, Duhem, Poincaré, Popper, Einstein, Bell), its potential for helping us understand quantum theory… and the prospect of harnessing AI to run the necessary experiments, which seem complex and intensive beyond the scope of current experimental techniques…

Cavalcanti… is holding out hope. We may never be able to run the experiment on a human, he says, but why not an artificial intelligence algorithm? In his newest work, along with the physicist Howard Wiseman and the mathematician Eleanor Rieffel, he argues that the friend could be an AI algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab. “At some point,” Cavalcanti contends, “we’ll have artificial intelligence that will be essentially indistinguishable from humans as far as cognitive abilities are concerned,” and we’ll be able to test his inequality once and for all.

But that’s not an uncontroversial assumption. Some philosophers of mind believe in the possibility of strong AI, but certainly not all. Thinkers in what’s known as embodied cognition, for instance, argue against the notion of a disembodied mind, while the enactive approach to cognition grants minds only to living creatures.

All of which leaves physics in an awkward position. We can’t know whether nature violates Cavalcanti’s [theorem] — we can’t know, that is, whether objectivity itself is on the metaphysical chopping block — until we can define what counts as an observer, and figuring that out involves physics, cognitive science and philosophy. The radical space of experimental metaphysics expands to entwine all three of them. To paraphrase Gonseth, perhaps they form a single whole…

“‘Metaphysical Experiments’ Probe Our Hidden Assumptions About Reality,” in @QuantaMagazine.

* Johann Wolfgang von Goethe

###

As we examine edges, we might send thoughtful birthday greetings to Rudolf Schottlaender; he was born on this date in 1900. A philosopher who studied with Edmund Husserl, Martin Heidegger, Nicolai Hartmann, and Karl Jaspers, Schottlaender survived the Nazi regime and the persecution of the Jews, hiding in Berlin. After the war, as his democratic and humanist proclivities kept him from posts in philosophy faculties, he distinguished himself as a classical philologist and translator (e.g., new translations of Sophocles which were very effective on the stage, and an edition of Petrarch).

But he continued to publish philosophical and political essays and articles, predominantly in the West, in which he saw himself as a mediator between the systems. Because of his positions critical of East Germany, he was put under close surveillance by the Ministry for State Security (Ministerium für Staatssicherheit, or Stasi)– and he inspired leading minds of the developing opposition in East Germany.

source

“Men have become the tools of their tools”*…

Visionary philosopher Bernard Stiegler argued that it’s not our technology that makes humans special; rather, it’s our relationship with that technology. Bryan Norton explains…

It has become almost impossible to separate the effects of digital technologies from our everyday experiences. Reality is parsed through glowing screens, unending data feeds, biometric feedback loops, digital prostheses and expanding networks that link our virtual selves to satellite arrays in geostationary orbit. Wristwatches interpret our physical condition by counting steps and heartbeats. Phones track how we spend our time online, map the geographic location of the places we visit and record our histories in digital archives. Social media platforms forge alliances and create new political possibilities. And vast wireless networks – connecting satellites, drones and ‘smart’ weapons – determine how the wars of our era are being waged. Our experiences of the world are soaked with digital technologies.

But for the French philosopher Bernard Stiegler, one of the earliest and foremost theorists of our digital age, understanding the world requires us to move beyond the standard view of technology. Stiegler believed that technology is not just about the effects of digital tools and the ways that they impact our lives. It is not just about how devices are created and wielded by powerful organisations, nation-states or individuals. Our relationship with technology is about something deeper and more fundamental. It is about technics.

According to Stiegler, technics – the making and use of technology, in the broadest sense – is what makes us human. Our unique way of existing in the world, as distinct from other species, is defined by the experiences and knowledge our tools make possible, whether that is a state-of-the-art brain-computer interface such as Neuralink, or a prehistoric flint axe used to clear a forest. But don’t be mistaken: ‘technics’ is not simply another word for ‘technology’. As Martin Heidegger wrote in his essay ‘The Question Concerning Technology’ (1954), which used the German term Technik instead of Technologie in the original title: the ‘essence of technology is by no means anything technological.’ This aligns with the history of the word: the etymology of ‘technics’ leads us back to something like the ancient Greek term for art – technē. The essence of technology, then, is not found in a device, such as the one you are using to read this essay. It is an open-ended creative process, a relationship with our tools and the world.

This is Stiegler’s legacy. Throughout his life, he took this idea of technics, first explored while he was imprisoned for armed robbery, further than anyone else. But his ideas have often been overlooked and misunderstood, even before he died in 2020. Today, they are more necessary than ever. How else can we learn to disentangle the effects of digital technologies from our everyday experiences? How else can we begin to grasp the history of our strange reality?…

[Norton unspools Stiegler’s remarkable life and the development of his thought…]

… Technology, for better or worse, affects every aspect of our lives. Our very sense of who we are is shaped and reshaped by the tools we have at our disposal. The problem, for Stiegler, is that when we pay too much attention to our tools, rather than how they are developed and deployed, we fail to understand our reality. We become trapped, merely describing the technological world on its own terms and making it even harder to untangle the effects of digital technologies and our everyday experiences. By encouraging us to pay closer attention to this world-making capacity, with its potential to harm and heal, Stiegler is showing us what else is possible. There are other ways of living, of being, of evolving. It is technics, not technology, that will give the future its new face…

Eminently worth reading in full: “Our tools shape our selves,” from @br_norton in @aeonmag.

Compare and contrast: Kevin Kelly‘s What Technology Wants

* Henry David Thoreau

###

As we own up, we might send phenomenological birthday greetings to Immanuel Kant; he was born on this date in 1724.  One of the central figures of modern philosophy, Kant is remembered primarily for his efforts to unite reason with experience (e.g., Critique of Pure Reason [Kritik der reinen Vernunft], 1781), and for his work on ethics (e.g., Metaphysics of Morals [Die Metaphysik der Sitten], 1797) and aesthetics (e.g., Critique of Judgment [Kritik der Urteilskraft], 1790).  

But Kant made important contributions to mathematics and astronomy. For example: his argument that mathematical truths are a form of synthetic a priori knowledge was cited by Einstein as an important early influence on his work.  And his description of the Milky Way as a lens-shaped collection of stars that represented only one of many “island universes” was later shown to be accurate by Herschel.

Act so as to treat humanity, whether in your own person or in that of another, at all times also as an end, and not only as a means.

Groundwork of the Metaphysics of Morals

source