(Roughly) Daily


“I cannot teach anybody anything. I can only make them think.”*…

Death of Socrates, Jacques-Louis David (source)

Benjamin Ross Hoffman puts “the Socratic Method” into context– important, timely context…

There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency. [or impiety– see here] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk.

Euthyphro’s confidence is striking. His own family thinks it is indecent for a son to prosecute his father; Euthyphro insists that true decency demands it, that he understands what the gods require better than his relatives do. Socrates, who is about to be tried for teaching indecency toward the gods, asks Euthyphro to explain what decency actually is, since Euthyphro claims to know, and Socrates will need such knowledge for his own defense.

Euthyphro’s first answer is: decency is what I am doing right now, prosecuting wrongdoers regardless of kinship. Socrates points out that this is an example, not a definition. There are many decent acts; what makes them all decent?

Euthyphro tries again: decency is what the gods love. But the gods disagree among themselves, Socrates observes, so by this definition the same act could be both decent and indecent. Euthyphro refines: decency is what all the gods love. And here Socrates asks a question Euthyphro cannot answer: do the gods love decent things because they are decent, or are things decent because the gods love them?

If decent things are decent because the gods love them, then decency is arbitrary, a matter of divine whim. Socrates is too polite to say so, but the implication is: if decency is defined by the arbitrary whim of our betters, who are you to prosecute your father?

If the gods love decent things because they are decent, then however we know this, we already know the standard for decency ourselves and can cut out the middleman. But then Euthyphro should be able to explain the standard. He can’t.

Euthyphro tries a few more times, suggesting that decency is a kind of service to the gods, a kind of trade with the gods. Each time Socrates gently follows the definition to its consequences, and each time it collapses. Eventually Euthyphro leaves, saying he is in a hurry. Socrates’ last words are a lament: you have abandoned me without the understanding I needed for my own defense.

This is usually read as a proto-academic dialogue about definitions. It is a scene from a civilization in crisis. A man is about to use the legal system to destroy his own father on the basis of a concept he cannot define, in a courthouse where another man is about to be destroyed by the same concept. And the man who cannot define it is not unusual. He is representative.

The indecency for which Socrates was being prosecuted seems to have consisted of asking just the sort of questions Socrates posed to Euthyphro…

[Hoffman sketches the culture and politics of Athens in the late fifth century, the role of the Sophists, and the (radical) role that Socrates played…]

… Plato also responded to his beloved mentor’s death by founding the Academy, a great house in Athens where philosophical reasoning was taught methodically. We still have our Academics.

Agnes Callard, in her recent book Open Socrates, wants Socrates to be timeless. She strips out the historical situation, strips out the aliveness that preceded the method, and ends up defending a method that’s obviously inapplicable in many of the cases where she claims it applies. Aristarchus did not need his assumptions questioned at random. He needed someone who could ask probing questions about his actual problem, from a perspective that didn’t share his assumptions about what was and wasn’t possible.

Zvi Mowshowitz, in his review of Callard’s book (part 1, part 2), argues at considerable length that the decontextualized version is bad. He is right. Cached beliefs are usually fine. Destabilizing them is usually harmful. Most people do not want to spend their lives in Socratic questioning, and they are right.

But Zvi has written a long polemic in two installments on the winning side of an incredibly lame debate about whether we should anxiously doubt ourselves all the time, responding to Callard’s decontextualized Socrates, not the real one. The real one did not devise a method and then apply it. He had a quality, something the oracle reached for the language of the tragedians to describe. And what was memorialized as a “method” was what happened when that quality met a city where every other participant in public life had stopped being alive.

Socrates invokes timeless considerations like logical coherence and commitment (even provisional) to specific claims; these are very natural things to appeal to when people are being squirmy, dramatic, hard to pin down, and fleeing to abstractions that resist falsification.

Spinoza, in the Theologico-Political Treatise, similarly resituated the teachings of Jesus of Nazareth in their proper context. The political teachings of the Gospels to turn the other cheek, forgive debts, and render unto Caesar what is due to him, are instructions for people living under a hostile and extractive system of domination. Citizens of a free republic have entirely different duties. They have an affirmative obligation to hold each other accountable, to sue people who have wronged them, to participate in collective self-governance. The teachings are not wrong. They are addressed to a specific situation, and become wrong when mechanically transplanted into an inappropriate context.

The reason to recover the historical Socrates is not only accuracy about the distant past; it is that by seeing this relevant aspect of the past more clearly, we might see more clearly what we are up against now.

Socratic cross-examination requires an interlocutor who at least would feel ashamed not to put on a show of accountability. The people Socrates questioned were performing wisdom, but they were performing it because the culture still demanded that leaders seem accountable. They would sit for the examination, because refusing would be disgraceful, like breaking formation in a hoplite phalanx. Their scripts collapsed because the scripts were designed to look like real accountability, and real accountability is what Socrates brought.

There is a useful framework for understanding how public discourse degrades, which distinguishes between guilt, shame, and depravity. A guilty person has violated a norm and intends to repair the breach by owning up and making amends. An ashamed person intends to conceal the violation, which means deflecting investigation. A depraved person has generalized the intent to conceal into a coalitional strategy: I will cover for you if you cover for me, and together we will derail any investigation that threatens either of us.

The leaders Socrates questioned were, at worst, ashamed. They had taken on roles they couldn’t account for, and they wanted to hide that fact, but they still felt the force of the demand for accountability. When Socrates pressed them, they squirmed, they went in circles, they eventually fled. But they engaged. They felt they had to engage. The culture of Athens, even in its degraded state, still held that a man who refused to give an account of his claims was disgraced.

Depravity is a further stage, and Sartre described it precisely in his book Anti-Semite and Jew:

Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.

The depraved person does not perform accountability. He plays with the forms of accountability to exhaust and humiliate the person who still takes them seriously. He is not running a script that is trying to pass as a perspective, collapsing only under the kind of questioning we still call Socratic. He is amusing himself at the expense of the questioner. Cross-examination does not expose him, because he was never trying to seem consistent. He was trying to demonstrate that consistency is for suckers. The Socratic method will not help him.

The Socratic method, if we can rightly call it that, was forged by the pressures confronted by a living mind in a city of the ashamed, people who still cared enough about accountability to fake it. It has nothing to say to the depraved themselves, who have dispensed with the pretense, though in a transitional period it might expose them to the judgment of the naïve.

But the quality that preceded the method is something else.

What the oracle recognized in Socrates was not the ability to cross-examine. It was something closer to what it recognized in Euripides: the capacity to be present to what is happening, to see the person in front of you rather than the drama you are supposed to enact with them, to respond to the situation rather than to your script about the situation. To be alive.

We do not need a new method. Methods are what you formalize after you understand the problem, and we are not there yet. What might still help us is the quality that precedes method: the willingness to see what is in front of us, to say the obvious thing that everyone embedded in the performance is too scripted to see, and to keep reaching out to others even when the response is usually not even embarrassment but indifference, not even a failed defense but a smirk.

The oracle didn’t say Socrates had the best method. It said he was the wisest man, in a society oriented against wisdom. The “method” was just how aliveness was memorialized by a city that still cared enough to be ashamed of being dead.

The question for us is what aliveness looks like in a city beyond shame…

Eminently worth reading in full.

The Socratic Method and the importance of recognizing and responding to the times in which we live: “Socrates is Mortal”

See also: “The real reason Socrates was given the death sentence– humiliating powerful people was not a key to success”

Apposite: “What Separates The Great From The Petty In History” (“embracing the relentless ally of reality makes all the difference”)

* Socrates

###

As we inhabit our moment, we might send thoughtful birthday greetings to David Hume; he was born on this date in 1711. A philosopher, historian, economist, and essayist, he developed a highly influential system of empiricism, philosophical scepticism, and metaphysical naturalism.

Hume strove to create a naturalistic science of man that examined the psychological basis of human nature. Hume followed John Locke in rejecting the existence of innate ideas, concluding that all human knowledge derives solely from experience; this places him amongst such empiricists as Francis Bacon, Thomas Hobbes, Locke, and George Berkeley.

Hume argued that inductive reasoning and belief in causality cannot be justified empirically; instead, they result from custom and mental habit. People never actually perceive that one event causes another but experience only the “constant conjunction” of events. This problem of induction means that to draw any causal inferences from past experience, it is necessary to presuppose that the future will resemble the past; this metaphysical presupposition cannot itself be grounded in prior experience.

An opponent of philosophical rationalists, Hume held that passions rather than reason govern human behaviour, proclaiming that “Reason is, and ought only to be the slave of the passions.” Hume was also a sentimentalist who held that ethics are based on emotion or sentiment rather than abstract moral principle. He maintained an early commitment to naturalistic explanations of moral phenomena and is usually accepted by historians of European philosophy to have first clearly expounded the is–ought problem, or the idea that a statement of fact alone can never give rise to a normative conclusion of what ought to be done.

Hume denied that people have an actual conception of the self, positing that they experience only a bundle of sensations and that the self is nothing more than this bundle of perceptions connected by an association of ideas. Hume’s compatibilist theory of free will takes causal determinism as fully compatible with human freedom. His philosophy of religion, including his rejection of miracles and critique of the argument from design, was especially controversial. Hume left a legacy that affected utilitarianism, logical positivism, the philosophy of science, early analytic philosophy, cognitive science, theology, and many other fields and thinkers. Immanuel Kant credited Hume as the inspiration that had awakened him from his “dogmatic slumbers.”

– source

Apropos the piece featured above, see Peter Kreeft’s Socrates Meets Hume: The Father of Philosophy Meets the Father of Modern Skepticism (“A Socratic Examination of [Hume’s] An Enquiry Concerning Human Understanding”)

Written by (Roughly) Daily

May 7, 2026 at 1:00 am

“I tend to think that most fears about A.I. are best understood as fears about capitalism”*…

Further to Wednesday’s and yesterday’s posts (on to other topics again after this, I promise), a powerful piece from Patrick Tanguay (in his always-illuminating Sentiers newsletter).

He begins with a consideration of Peter Wolfendale’s “Geist in the machine”…

… Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we”… end up with a completely different relationship to our technology and capital. However, his argument up to that point is a worthy reflection, and it pairs well with the one below and another from issue No.387. I’m talking about Anil Seth’s The mythology of conscious AI, where he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open:

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account…

Tanguay then turns to “The Prospect of Butlerian Jihad” by Liam Mullally, in which Mullally uses…

… Herbert’s Dune and the Butlerian Jihad [here] as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. [see here] If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons:

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.

As Max Read (scroll down) observes:

… if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary…

The question isn’t if we want a relationship with technology; it’s what kind of relationship we want. We’ve always (at least since we’ve been a conscious species) co-existed with, and been shaped by, tools; we’ve always suffered the “friction” of technological transition as we innovate new tools. As yesterday’s post suggested (in its defense of the open web in the face of a voracious attack from powerful LLM companies), “what matters is power”… power to shape the relationship(s) we have with the technologies we use. That power is currently in the hands of a relatively few companies, all concerned above all else with harvesting as much money as they can from “uses” they design to amplify engagement and ease monetization. It doesn’t, of course, have to be this way.

We’ve lived under modern capitalism for only a few hundred years, and under the hyper-global, hyper-extractive regime we currently inhabit for only a century-and-a-half or so, during which time, in fits and starts, it has grown ever more rapacious. George Monbiot observed that “like coal, capitalism has brought many benefits. But, like coal, it now causes more harm than good.” And Ursula Le Guin, that “we live in capitalism. Its power seems inescapable. So did the divine right of kings.” In many countries, “divine right” monarchy has been replaced by “constitutional monarchy.” Perhaps it’s time for more of the world to consider “constitutional capitalism.” We could start by learning from the successes and failures of Scandinavia and Europe.

Social media, AI, quantum computing– on being clear as to the real issue: “Geist in the machine & The prospect of Butlerian Jihad,” from @inevernu.bsky.social.

Apposite: “The enclosure of the commons inaugurates a new ecological order. Enclosure did not just physically transfer the control over grasslands from the peasants to the lord. It marked a radical change in the attitudes of society toward the environment.”

(All this said, David Chalmers argues that there’s one possibility that might change everything: “Could a Large Language Model be Conscious?” On the other hand, the ARC Prize Foundation suggests, we have some time: a test they devised for benchmarking agentic intelligence recently found that “humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%”… :)

Ted Chiang (gift article; see also here and here and here)

###

As we keep our eyes on the prize, we might spare a thought for a man who wrestled with a version of these same issues in the last century, Pierre Teilhard de Chardin; he died on this date in 1955. A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky’s concept of the noosphere. Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory. His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, the Church lifted its ban.

source

“Show, don’t tell”*…

[Illustration: two stick figures, one red and one blue, both saying “hi” amid explosive lines, beneath the title “The Ozma Problem”]

Some things are very difficult to explain using words alone; they require physical demonstration. Consider, for example, the distinction between right and left. It turns out that this difficulty has been at the heart of the great scientific debates about the nature of space…

… explain right and left to a friend using language alone and without using the words right and left. As you can only use language, you can’t show your hands or use pictures!

It’s tricky, isn’t it? The difference between right and left isn’t as straightforward as it seems. If we dig a little deeper, we will find that the science behind right and left is surprising, complex, and profound.

How can two things be identical yet different at the same time? This was the question that puzzled one of humankind’s greatest thinkers, Immanuel Kant.

Many of the great debates of the Scientific Revolution during the 16th and 17th centuries concerned the nature of space. The English polymath Sir Isaac Newton proposed that space was absolute: space is an entity in itself and exists even without objects, matter, or living beings filling it. 

In contrast, Gottfried Leibniz, Newton’s bitter rival, argued that space was relational: it only existed because of the relations between the objects that fill it. If objects do not exist, then space doesn’t either.

Meanwhile, Immanuel Kant used handedness to give his two cents. He asked us to imagine a solitary hand floating in an otherwise completely empty space. The hand must either be a right hand or a left hand, and this will be the case even in a space where no relationships between objects can be observed. Kant noted that our hands are geometrically and mathematically identical in every way possible, whether it be the lengths of the fingers or the angles between them. Yet the one fundamental difference between them, that one is a right hand and the other is a left, exists in itself; it is intrinsic to the hand and not a relation to any other object. For Kant, this suggested that space itself has an absolute character.

Ultimately, Kant’s theories of handedness were not foolproof and could not be used to prove that space is absolute. Indeed, Kant would switch between the Newtonian and Leibnizian schools of thought during his lifetime. However, Kant did show just how puzzling and difficult it is to explain why right hands and left hands are identical but different. That intrinsic quality of handedness is almost impossible to explain without showing, and this is the root of the Ozma Problem.

In 1960, Project Ozma was launched in West Virginia. Named after the ruler of the fictional Land of Oz, Project Ozma used a large radio telescope to listen for signals from space, signals that could be proof of extraterrestrial intelligence. Unfortunately, the project ran for only a few months and had no major success.

Let’s say the telescope had picked up such signals. How would we on Earth respond? We would need to decode their signals, after which we would send our own. Telescopes and computers use binary code, and directionality is built into binary: a string of bits only has a definite value once you agree which end counts as the most significant digit. So, if we are sending binary signals to aliens, we need to be sure they understand which direction is left and which is right. How can we be sure they share our understanding of directions?
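The point can be made concrete with a short sketch (plain Python; the particular bit string is just an arbitrary example): the very same four bits decode to two different numbers depending on which end is read as the most significant digit.

```python
bits = "1101"

# Our convention: the leftmost bit is the most significant digit.
msb_first = int(bits, 2)        # 1*8 + 1*4 + 0*2 + 1*1 = 13

# A reader who starts from the other end sees "1011" instead:
lsb_first = int(bits[::-1], 2)  # 1*8 + 0*4 + 1*2 + 1*1 = 11

print(msb_first, lsb_first)     # prints: 13 11
```

Without an agreed reading order, a transmitted bit string is ambiguous in exactly this way; the Ozma Problem is the question of how to fix such a directional convention with no shared frame of reference.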

This is the Ozma Problem, a thought experiment first described by Martin Gardner [see the almanac entry here] in his 1964 book, The Ambidextrous Universe. In this book, Gardner pitched a number of solutions.

Before going into Gardner’s work, here’s a seemingly simple solution: lay your palms face down on a table, equally spaced from your body. The thumb that’s closer to your heart? That’s the left side. The right side is defined by the thumb farther from the heart.

Another potential solution would be to use north and south as reference points: when facing north, everything towards east is the right side, and everything pointing west is the left side.

The problem with these solutions is that they both rely on a shared point of reference, like the direction of north-south-east-west and the location of the heart. In no way can we be certain that an alien species would share these!

Some of the solutions that Gardner proposed in his book use magnetic fields, planetary rotation, and the direction of current flow. And as we discussed before, they all fail because of the need for a shared point of reference. 

So, after centuries of wondering whether we are alone in the universe, we finally make contact with an alien species, only to find that our inability to explain something as mundane as right and left precludes meaningful dialogue. The Ozma Problem demonstrates the limits of our language, and it challenges anthropocentrism, which is the notion that human beings and our experiences are the center of the universe.

Many thought experiments are hypothetical and can’t be solved, but the Ozma Problem does have a solution. In fact, the solution already existed when Gardner first described the problem. But it’s not immediately associated with right-left asymmetry or aliens.
While we cannot be sure that aliens share our anatomy or our perception of north-south-east-west, if they inhabit the same universe as us, we can assume the fundamental forces of physics apply to them too.

There are four fundamental forces of physics: gravity, electromagnetism, the strong nuclear force (the force that binds atomic nuclei together), and the weak nuclear force (the force that causes atomic decay).

Until 1956, it was assumed that all of these fundamental forces display parity. Parity is an important concept in physics, and it can be demonstrated visually with a mirror. If you stand in front of a mirror holding an apple in your right hand and then drop it, the reflection will show the apple falling to the ground too, just from your left hand. Gravity works the same way in the reflection. Likewise, if we could watch the strong force binding atomic nuclei in a mirror, the mirrored process would be physically identical, just with right and left switched.
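In symbols (a sketch added here, not part of the original post): the parity operation flips every spatial coordinate,

```latex
\[
  P : (x, y, z) \longmapsto (-x, -y, -z)
\]
```

and a force is said to respect parity if every process it allows remains an allowed process, occurring with the same probability, after this flip — which is why the falling apple looks equally lawful in the mirror.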

But in 1956, Professor Chien-Shiung Wu, a physicist, conducted a groundbreaking experiment. She was able to prove that the weak nuclear force—the force behind certain kinds of atomic decay—does not always display parity. The weak nuclear force does not adhere to mirror symmetry.

Professor Wu showed this by observing the decay of cobalt-60 atoms. When these atoms decay, they emit electrons. Until then, scientists had assumed that such electrons would be emitted equally in all directions. But Wu found that magnetically aligned cobalt-60 nuclei preferentially emit electrons in one direction—opposite the direction of the nuclear spin. In other words, the process is asymmetric. The weak force, alone among the fundamental forces, does not adhere to parity or mirror symmetry, thus showing that directionality is intrinsic to the universe, just as Kant had postulated in the 18th century.
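A small sketch (added here, not from the post) of why Wu’s asymmetry implies parity violation. Under the parity operation, “polar” vectors such as position and momentum reverse, while “axial” vectors — anything built like a cross product, such as spin or angular momentum — do not. The numbers below are arbitrary illustrations, not data from the experiment.

```python
def parity(v):
    """Apply the parity operation: flip all spatial components of a 3-vector."""
    return tuple(-c for c in v)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x*y for x, y in zip(a, b))

# Model the nuclear spin J as an axial vector (a cross product of two
# polar vectors) and the emitted electron's momentum p as a polar vector.
r, v = (1.0, 2.0, 3.0), (0.5, -1.0, 2.0)
J = cross(r, v)           # axial: unchanged by parity
p = (0.0, 0.0, -1.0)      # polar: reversed by parity

J_mirror = cross(parity(r), parity(v))  # both factors flip, so J does not
p_mirror = parity(p)

assert J_mirror == J                          # spin survives the mirror
assert dot(J_mirror, p_mirror) == -dot(J, p)  # the spin-momentum correlation flips sign

# A parity-respecting force therefore requires the average of J.p over many
# decays to be zero. Wu found it wasn't: cobalt-60 emits electrons
# preferentially opposite the nuclear spin.
```

The point of the sketch: a nonzero average correlation between an axial vector (spin) and a polar vector (electron momentum) is exactly the kind of quantity that cannot survive in a mirror-symmetric theory.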

For the first time in history, it was proven that nature can prefer one direction. Very soon after Wu’s findings, physicists were able to show that the elementary particles known as neutrinos are always “left-handed”: their spin always points opposite their direction of motion.
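In slightly more precise terms (an added aside, using the standard physics definition rather than anything from the post): this handedness is called helicity, the projection of a particle’s spin onto its momentum,

```latex
\[
  h \;=\; \frac{\vec{S} \cdot \vec{p}}{\lvert \vec{S} \rvert \, \lvert \vec{p} \rvert}
\]
```

and every neutrino ever detected has been left-handed ($h = -1$), while every antineutrino has been right-handed ($h = +1$).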

What does this mean for our communication with aliens? If the aliens can replicate Professor Wu’s experiment and observe the direction in which electrons are emitted from decaying cobalt-60, relative to the nuclear spin, they can orient right and left the same way we do!

Ironically, Professor Wu was not afforded any sort of parity herself during her working life: other scientists were recognized for research that could not have been achieved without hers. Today, thanks in part to Professor Wu, the weak force remains one of the most important and mysterious topics in physics.

So, if the only way to scientifically and definitively define the difference between right and left is to run a delicate nuclear-physics experiment on the decay of cobalt-60, clearly the difference is not as straightforward as it may first seem! The Ozma Problem is proof that the most mundane concepts are sometimes directly linked to the cosmos and speak to the nature of existence itself…

An essay by Dr. Maloy Das (see the bio in this unrelated– but also fascinating– article by him). From the remarkable blog, Fascinating World, scored as a highly credible source by Media Bias/Fact Check (MBFC) for having proper sourcing, no failed fact-checks, and “highly factual” reporting. It’s the work of Krishna Rathuryan, currently a senior at a prep school in Princeton (where he’s also apparently a pretty accomplished distance runner), and a team of his friends.

When language fails: “What Is The Ozma Problem, And Why Does It Matter?”

* attributed to playwright Anton Chekhov, who said “Don’t tell me the moon is shining; show me the glint of light on broken glass.” It has, of course, become a motto for many writers across genres.

###

As we explore explanation, we (especially any readers in or near Manhattan Beach, California) might note that today is one of the two days of the year (symmetrically on either side of the winter solstice, 37 days before and 37 after) when the public sculpture there, “Light Gate,” becomes a portal “unlocked” by the rays of the setting sun… as Atlas Obscura puts it, “a bit of Druidic paganism by way of high modern design.”

A colorful public sculpture featuring a circular frame with vibrant glass panels reflecting the sunset.

source

“We live, in fact, in a world starved for solitude, silence, and privacy: and therefore starved for meditation and true friendship”*…

… if that was true then, it is even more so now. Ben Tarnoff takes off from Lowry Pressly‘s new book to ponder why privacy matters and why we have such trouble even thinking about how to protect it…

… Today, it is harder to keep one’s mind in place. Our thoughts leak through the sieve of our smartphones, where they join the great river of everyone else’s. The consequences, for both our personal and collective lives, are much discussed: How can we safeguard our privacy against state and corporate surveillance? Is Instagram making teen-agers depressed? Is our attention span shrinking?

There is no doubt that an omnipresent Internet connection, and the attendant computerization of everything, is inducing profound changes. Yet the conversation that has sprung up around these changes can sometimes feel a little predictable. The same themes and phrases tend to reappear. As the Internet and the companies that control it have become an object of permanent public concern, the concerns themselves have calcified into clichés. There is an algorithmic quality to our grievances with algorithmic life.

Lowry Pressly’s new book, “The Right to Oblivion: Privacy and the Good Life,” defies this pattern. It is a radiantly original contribution to a conversation gravely in need of new thinking. Pressly, who teaches political science at Stanford, takes up familiar fixations of tech discourse—privacy, mental health, civic strife—but puts them into such a new and surprising arrangement that they are nearly unrecognizable. The effect is like walking through your home town after a tornado: you recognize the buildings, but after some vigorous jumbling they have acquired a very different shape.

Pressly trained as a philosopher, and he has a philosopher’s fondness for sniffing out unspoken assumptions. He finds one that he considers fundamental to our networked era: “the idea that information has a natural existence in human affairs, and that there are no aspects of human life which cannot be translated somehow into data.” This belief, which he calls the “ideology of information,” has an obvious instrumental value to companies whose business models depend on the mass production of data, and to government agencies whose machinery of monitoring and repression rely on the same.

But Pressly also sees the ideology of information lurking in a less likely place—among privacy advocates trying to defend us from digital intrusions. This is because the standard view of privacy assumes there is “some information that already exists,” and what matters is keeping it out of the wrong hands. Such an assumption, for Pressly, is fatal. It “misses privacy’s true value and unwittingly aids the forces it takes itself to be resisting,” he writes. To be clear, Pressly is not opposed to reforms that would give us more power over our data—but it is a mistake “to think that this is what privacy is for.” “Privacy is valuable not because it empowers us to exercise control over our information,” he argues, “but because it protects against the creation of such information in the first place.”

If this idea sounds intriguing but exotic, you may be surprised to learn how common it once was. “A sense that privacy is fundamentally opposed to information has animated public moral discourse on the subject since the very beginning,” Pressly writes…

[Tarnoff recaps Pressly’s brief history of the technologies that changed our relationship to information, from Kodak through CCTV, to AI…]

… The reason that Pressly feels so strongly about imposing limits on datafication is not only because of the many ways that data can be used to damage us. It is also because, in his view, we lose something precious when we become information, regardless of how it is used. In the very moment when data are made, Pressly believes, a line is crossed. “Oblivion” is his word for what lies on the other side.

Oblivion is a realm of ambiguity and potential. It is fluid, formless, and opaque. A secret is an unknown that can become known. Oblivion, by contrast, is unknowable: it holds those varieties of human experience which are “essentially resistant to articulation and discovery.” It is also a place beyond “deliberate, rational control,” where we lose ourselves or, as Pressly puts it, “come apart.” Sex and sleep are two of the examples he provides. Both bring us into the “unaccountable regions of the self,” those depths at which our ego dissolves and about which it is difficult to speak in definite terms. Physical intimacy is hard to render in words—“The experience is deflated by description,” Pressly observes—and the same is notoriously true of the dreams we have while sleeping, which we struggle to narrate, or even to remember, on waking.

Oblivion is fragile, however. When it comes into contact with information, it disappears. This is why we need privacy: it is the protective barrier that keeps oblivion safe from information. Such protection insures that “one can actually enter into oblivion from time to time, and that it will form a reliably available part of the structure of one’s society.”

But why do we need to enter into oblivion from time to time, and what good does it do us? Pressly gives a long list of answers, drawn not only from the Victorians but also from the work of Michel Foucault, Roland Barthes, Gay Talese, Jorge Luis Borges, and Hannah Arendt. One is that oblivion is restorative: we come apart in order to come back together. (Sleep is a case in point; without a nightly suspension of our rational faculties, we go nuts.) Another is the notion that oblivion is integral to the possibility of personal evolution. “The main interest in life and work is to become someone else that you were not in the beginning,” Foucault writes. To do so, however, you must believe that the future can be different from the past—a belief that becomes harder to sustain when one is besieged by information, as the obsessive documentation of life makes it “more fixed, more factual, with less ambiguity and life-giving potentiality.” Oblivion, by setting aside a space for forgetting, offers a refuge from this “excess of memory,” and thus a standpoint from which to imagine alternative futures.

Oblivion is also essential for human dignity. Because we cannot be fully known, we cannot be fully instrumentalized. Immanuel Kant urged us to treat others as ends in themselves, not merely as means. For Pressly, our obscurities are precisely what endow us with a sense of value that exceeds our usefulness. This, in turn, helps assure us that life is worth living, and that our fellow human beings are worthy of our trust. “There can be no trust of any sort without some limits to knowledge,” Pressly writes…

… Psychoanalysis first emerged in the late nineteenth century, in parallel with the idea of privacy. This was a period when the boundary between public and private was being redrawn, not only with the onslaught of handheld cameras but also, more broadly, because of the dislocating forces of what historians call the Second Industrial Revolution. Urbanization pulled workers from the countryside and packed them into cities, while mass production meant they could buy (rather than make) most of what they needed. These developments weakened the institution of the family, which lost its primacy as people fled rural kin networks and the production of life’s necessities moved from the household to the factory.

In response, a new freedom appeared. For the first time, the historian Eli Zaretsky observes, “personal identity became a problem and a project for individuals.” If you didn’t have your family to tell you who you were, you had to figure it out yourself. Psychoanalysis helped the moderns to make sense of this question, and to try to arrive at an answer.

More than a century later, the situation looks different. If an earlier stage of capitalism laid the material foundations for a new experience of individuality, the present stage seems to be producing the opposite. In their taverns, theatres, and dance halls, the city dwellers of the Second Industrial Revolution created a culture of social and sexual experimentation. Today’s young people are lonely and sexless. At least part of the reason is the permanent connectivity that, as Pressly argues, conveys the feeling that “one’s time and attention—that is to say, one’s life—are not entirely one’s own.”

The modernist city promised anonymity, reinvention. The Internet is devoid of such pleasures. It is more like a village: a place where your identity is fixed. Online, we are the sum of what we have searched, clicked, liked, and bought. But there are futures beyond those predicted through statistical extrapolations from the present. In fact, the past is filled with the arrival of such futures: those blind corners when no amount of information could tell you what was coming. History has a habit of humbling its participants. Somewhere in its strange rhythms sits the lifelong work of making a life of one’s own…

We often want to keep some information to ourselves. But information itself may be the problem: “What Is Privacy For?” from @bentarnoff in @NewYorker. (Possible paywall; archived link here.)

Pair with the two (marvelous, provocative) documentary series from Adam Curtis and the BBC: The Century of the Self and HyperNormalisation, both of which are available on YouTube.

* C. S. Lewis

###

As we make room, we might send painfully-observant birthday greetings to Lenny Bruce; he was born on this date in 1925. A comedian, social critic, and satirist, he was ranked (in a 2017 Rolling Stone poll) the third best stand-up comic of all time– behind Richard Pryor and George Carlin, both of whom credit Bruce as an influence.

source

Written by (Roughly) Daily

October 13, 2024 at 1:00 am

“Few people have the imagination for reality”*…

Experiments that test physics and philosophy as “a single whole,” Amanda Gefter suggests, may be our only route to surefire knowledge about the universe…

Metaphysics is the branch of philosophy that deals in the deep scaffolding of the world: the nature of space, time, causation and existence, the foundations of reality itself. It’s generally considered untestable, since metaphysical assumptions underlie all our efforts to conduct tests and interpret results. Those assumptions usually go unspoken.

Most of the time, that’s fine. Intuitions we have about the way the world works rarely conflict with our everyday experience. At speeds far slower than the speed of light or at scales far larger than the quantum one, we can, for instance, assume that objects have definite features independent of our measurements, that we all share a universal space and time, that a fact for one of us is a fact for all. As long as our philosophy works, it lurks undetected in the background, leading us to mistakenly believe that science is something separable from metaphysics.

But at the uncharted edges of experience — at high speeds and tiny scales — those intuitions cease to serve us, making it impossible for us to do science without confronting our philosophical assumptions head-on. Suddenly we find ourselves in a place where science and philosophy can no longer be neatly distinguished. A place, according to the physicist Eric Cavalcanti, called “experimental metaphysics.”

Cavalcanti is carrying the torch of a tradition that stretches back through a long line of rebellious thinkers who have resisted the usual dividing lines between physics and philosophy. In experimental metaphysics, the tools of science can be used to test our philosophical worldviews, which in turn can be used to better understand science. Cavalcanti, a 46-year-old native of Brazil who is a professor at Griffith University in Brisbane, Australia, and his colleagues have published the strongest result attained in experimental metaphysics yet, a theorem that places strict and surprising constraints on the nature of reality. They’re now designing clever, if controversial, experiments to test our assumptions not only about physics, but about the mind.

While we might expect the injection of philosophy into science to result in something less scientific, in fact, says Cavalcanti, the opposite is true. “In some sense, the knowledge that we obtain through experimental metaphysics is more secure and more scientific,” he said, because it vets not only our scientific hypotheses but the premises that usually lie hidden beneath…

Gefter traces the history of this integrative train of thought (Kant, Duhem, Poincaré, Popper, Einstein, Bell), its potential for helping us understand quantum theory… and the prospect of harnessing AI to run the necessary experiments– seemingly complex and resource-intensive beyond the scope of current experimental techniques…

Cavalcanti… is holding out hope. We may never be able to run the experiment on a human, he says, but why not an artificial intelligence algorithm? In his newest work, along with the physicist Howard Wiseman and the mathematician Eleanor Rieffel, he argues that the friend could be an AI algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab. “At some point,” Cavalcanti contends, “we’ll have artificial intelligence that will be essentially indistinguishable from humans as far as cognitive abilities are concerned,” and we’ll be able to test his inequality once and for all.

But that’s not an uncontroversial assumption. Some philosophers of mind believe in the possibility of strong AI, but certainly not all. Thinkers in what’s known as embodied cognition, for instance, argue against the notion of a disembodied mind, while the enactive approach to cognition grants minds only to living creatures.

All of which leaves physics in an awkward position. We can’t know whether nature violates Cavalcanti’s [theorem] — we can’t know, that is, whether objectivity itself is on the metaphysical chopping block — until we can define what counts as an observer, and figuring that out involves physics, cognitive science and philosophy. The radical space of experimental metaphysics expands to entwine all three of them. To paraphrase Gonseth, perhaps they form a single whole…

“‘Metaphysical Experiments’ Probe Our Hidden Assumptions About Reality,” in @QuantaMagazine.

* Johann Wolfgang von Goethe

###

As we examine edges, we might send thoughtful birthday greetings to Rudolf Schottlaender; he was born on this date in 1900. A philosopher who studied with Edmund Husserl, Martin Heidegger, Nicolai Hartmann, and Karl Jaspers, Schottlaender survived the Nazi regime and the persecution of the Jews, hiding in Berlin. After the war, as his democratic and humanist proclivities kept him from posts in philosophy faculties, he distinguished himself as a classical philologist and translator (e.g., new translations of Sophocles which were very effective on the stage, and an edition of Petrarch).

But he continued to publish philosophical and political essays and articles, predominantly in the West, in which he saw himself as a mediator between the systems. Because of his positions critical of East Germany, he was put under close surveillance by the Ministry for State Security (Ministerium für Staatssicherheit, or Stasi)– and inspired leading minds of the developing opposition in East Germany.

source