“I cannot teach anybody anything. I can only make them think.”*…

Benjamin Ross Hoffman puts “the Socratic Method” into context – important, timely context…
There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency. [or impiety – see here] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk.
Euthyphro’s confidence is striking. His own family thinks it is indecent for a son to prosecute his father; Euthyphro insists that true decency demands it, that he understands what the gods require better than his relatives do. Socrates, who is about to be tried for teaching indecency toward the gods, asks Euthyphro to explain what decency actually is, since Euthyphro claims to know, and Socrates will need such knowledge for his own defense.
Euthyphro’s first answer is: decency is what I am doing right now, prosecuting wrongdoers regardless of kinship. Socrates points out that this is an example, not a definition. There are many decent acts; what makes them all decent?
Euthyphro tries again: decency is what the gods love. But the gods disagree among themselves, Socrates observes, so by this definition the same act could be both decent and indecent. Euthyphro refines: decency is what all the gods love. And here Socrates asks a question Euthyphro cannot answer: do the gods love decent things because they are decent, or are things decent because the gods love them?
If decent things are decent because the gods love them, then decency is arbitrary, a matter of divine whim. Socrates is too polite to say so, but the implication is: if decency is defined by the arbitrary whim of our betters, who are you to prosecute your father?
If the gods love decent things because they are decent, then however we know this, we already know the standard for decency ourselves and can cut out the middleman. But then Euthyphro should be able to explain the standard. He can’t.
Euthyphro tries a few more times, suggesting that decency is a kind of service to the gods, a kind of trade with the gods. Each time Socrates gently follows the definition to its consequences, and each time it collapses. Eventually Euthyphro leaves, saying he is in a hurry. Socrates’ last words are a lament: you have abandoned me without the understanding I needed for my own defense.
This is usually read as a proto-academic dialogue about definitions. It is a scene from a civilization in crisis. A man is about to use the legal system to destroy his own father on the basis of a concept he cannot define, in a courthouse where another man is about to be destroyed by the same concept. And the man who cannot define it is not unusual. He is representative.
The indecency for which Socrates was being prosecuted seems to have consisted of asking just the sort of questions Socrates posed to Euthyphro…
[Hoffman sketches the culture and politics of Athens in the late fifth century, the role of the Sophists, and the (radical) role that Socrates played…]
… Plato also responded to his beloved mentor’s death by founding the Academy, a great house in Athens where philosophical reasoning was taught methodically. We still have our Academics.
Agnes Callard, in her recent book Open Socrates, wants Socrates to be timeless. She strips out the historical situation, strips out the aliveness that preceded the method, and ends up defending a method that’s obviously inapplicable in many of the cases where she claims it applies. Aristarchus did not need his assumptions questioned at random. He needed someone who could ask probing questions about his actual problem, from a perspective that didn’t share his assumptions about what was and wasn’t possible.
Zvi Mowshowitz, in his review of Callard’s book (part 1, part 2), argues at considerable length that the decontextualized version is bad. He is right. Cached beliefs are usually fine. Destabilizing them is usually harmful. Most people do not want to spend their lives in Socratic questioning, and they are right.
But Zvi has written a long polemic in two installments on the winning side of an incredibly lame debate about whether we should anxiously doubt ourselves all the time, responding to Callard’s decontextualized Socrates, not the real one. The real one did not devise a method and then apply it. He had a quality, something the oracle reached for the language of the tragedians to describe. And what was memorialized as a “method” was what happened when that quality met a city where every other participant in public life had stopped being alive.
Socrates invokes timeless considerations like logical coherence, and committing (even provisionally) to specific claims; these are very natural things to try to appeal to when people are being squirmy, dramatic, hard to pin down, and fleeing to abstractions that resist falsification.
Spinoza, in the Theologico-Political Treatise, similarly resituated the teachings of Jesus of Nazareth in their proper context. The political teachings of the Gospels to turn the other cheek, forgive debts, and render unto Caesar what is due to him, are instructions for people living under a hostile and extractive system of domination. Citizens of a free republic have entirely different duties. They have an affirmative obligation to hold each other accountable, to sue people who have wronged them, to participate in collective self-governance. The teachings are not wrong. They are addressed to a specific situation, and become wrong when mechanically transplanted into an inappropriate context.
The reason to recover the historical Socrates is not only accuracy about the distant past; it is that by seeing this relevant aspect of the past more clearly, we might see more clearly what we are up against now.
Socratic cross-examination requires an interlocutor who at least would feel ashamed not to put on a show of accountability. The people Socrates questioned were performing wisdom, but they were performing it because the culture still demanded that leaders seem accountable. They would sit for the examination, because refusing would be disgraceful, like breaking formation in a hoplite phalanx. Their scripts collapsed because the scripts were designed to look like real accountability, and real accountability is what Socrates brought.
There is a useful framework for understanding how public discourse degrades, which distinguishes between guilt, shame, and depravity. A guilty person has violated a norm and intends to repair the breach by owning up and making amends. An ashamed person intends to conceal the violation, which means deflecting investigation. A depraved person has generalized the intent to conceal into a coalitional strategy: I will cover for you if you cover for me, and together we will derail any investigation that threatens either of us.
The leaders Socrates questioned were, at worst, ashamed. They had taken on roles they couldn’t account for, and they wanted to hide that fact, but they still felt the force of the demand for accountability. When Socrates pressed them, they squirmed, they went in circles, they eventually fled. But they engaged. They felt they had to engage. The culture of Athens, even in its degraded state, still held that a man who refused to give an account of his claims was disgraced.
Depravity is a further stage, and Sartre described it precisely in his book Anti-Semite and Jew:
Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.
The depraved person does not perform accountability. He plays with the forms of accountability to exhaust and humiliate the person who still takes them seriously. He is not running a script that is trying to pass as a perspective, collapsing only under the kind of questioning we still call Socratic. He is amusing himself at the expense of the questioner. Cross-examination does not expose him, because he was never trying to seem consistent. He was trying to demonstrate that consistency is for suckers. The Socratic method will not help him.
The Socratic method, if we can rightly call it that, was forged by the pressures confronted by a living mind in a city of the ashamed, people who still cared enough about accountability to fake it. It has nothing to say to the depraved themselves, who have dispensed with the pretense, though in a transitional period it might still expose them to the judgment of the naïve.
But the quality that preceded the method is something else.
What the oracle recognized in Socrates was not the ability to cross-examine. It was something closer to what it recognized in Euripides: the capacity to be present to what is happening, to see the person in front of you rather than the drama you are supposed to enact with them, to respond to the situation rather than to your script about the situation. To be alive.
We do not need a new method. Methods are what you formalize after you understand the problem, and we are not there yet. What might still help us is the quality that precedes method: the willingness to see what is in front of us, to say the obvious thing that everyone embedded in the performance is too scripted to see, and to keep reaching out to others even when the response is usually not even embarrassment but indifference, not even a failed defense but a smirk.
The oracle didn’t say Socrates had the best method. It said he was the wisest man, in a society oriented against wisdom. The “method” was just how aliveness was memorialized by a city that still cared enough to be ashamed of being dead.
The question for us is what aliveness looks like in a city beyond shame…
Eminently worth reading in full.
The Socratic Method and the importance of recognizing and responding to the times in which we live: “Socrates is Mortal”
See also: “The real reason Socrates was given the death sentence – humiliating powerful people was not a key to success”
Apposite: “What Separates The Great From The Petty In History” (“embracing the relentless ally of reality makes all the difference”)
* Socrates
###
As we inhabit our moment, we might send thoughtful birthday greetings to David Hume; he was born on this date in 1711. A philosopher, historian, economist, and essayist, he developed a highly influential system of empiricism, philosophical scepticism, and metaphysical naturalism.
Hume strove to create a naturalistic science of man that examined the psychological basis of human nature. Hume followed John Locke in rejecting the existence of innate ideas, concluding that all human knowledge derives solely from experience; this places him amongst such empiricists as Francis Bacon, Thomas Hobbes, Locke, and George Berkeley.
Hume argued that inductive reasoning and belief in causality cannot be justified empirically; instead, they result from custom and mental habit. People never actually perceive that one event causes another but experience only the “constant conjunction” of events. This problem of induction means that to draw any causal inferences from past experience, it is necessary to presuppose that the future will resemble the past; this metaphysical presupposition cannot itself be grounded in prior experience.
An opponent of philosophical rationalists, Hume held that passions rather than reason govern human behaviour, proclaiming that “Reason is, and ought only to be the slave of the passions.” Hume was also a sentimentalist who held that ethics are based on emotion or sentiment rather than abstract moral principle. He maintained an early commitment to naturalistic explanations of moral phenomena and is usually accepted by historians of European philosophy to have first clearly expounded the is–ought problem, or the idea that a statement of fact alone can never give rise to a normative conclusion of what ought to be done.
Hume denied that people have an actual conception of the self, positing that they experience only a bundle of sensations and that the self is nothing more than this bundle of perceptions connected by an association of ideas. Hume’s compatibilist theory of free will takes causal determinism as fully compatible with human freedom. His philosophy of religion, including his rejection of miracles and critique of the argument from design, was especially controversial. Hume left a legacy that affected utilitarianism, logical positivism, the philosophy of science, early analytic philosophy, cognitive science, theology and many other fields and thinkers. Immanuel Kant credited Hume as the inspiration that had awakened him from his “dogmatic slumbers.”
– source
Apropos the piece featured above, see Peter Kreeft’s Socrates Meets Hume: The Father of Philosophy Meets the Father of Modern Skepticism (“A Socratic Examination of [Hume’s] An Enquiry Concerning Human Understanding”)

“The web of our life is of a mingled yarn”*…
In what does our personhood consist? From what/where does it come? João de Pina Cabral unpacks the seminal thinking of Lucien Lévy-Bruhl and the advances in cognitive science and developmental psychology that suggest that a person is not self-contained, but the outcome of a lifelong process of living with others…
It matters to understand what constitutes a person. After all, if there is one feature that distinguishes human society from other forms of sociality, it is that, at around one year of age, most human beings attain personhood: they learn to speak a language, develop object permanence – the understanding that things do not disappear when out of sight – and relate to others in consciously moral ways. Should all persons be accorded the same rights and duties by virtue of this condition? These are weighty questions that have occupied social scientists and philosophers since antiquity – particularly at moments such as the present, when war and imperial oppression once again raise their ugly heads.
Nevertheless, this question cannot be approached as a purely moral matter, for in order to determine what rights and duties may be attributed to persons, it is necessary to establish what persons are. This longstanding perplexity can now be addressed in increasingly sophisticated ways, following a century of sustained anthropological enquiry.
In September 1926, two of the most eminent anthropologists of the day met in person for the first time in New York. Both were Jewish and born in Europe, but one – Franz Boas – had become an American citizen and was a leading figure at Columbia University in New York, while the other – Lucien Lévy-Bruhl – was a professor in Paris. Both were highly learned, humanistically inclined and politically liberal; they respected one another, yet they did not seem to agree about the matter of the person.
Lévy-Bruhl had begun his career as a philosopher of ethics. His doctoral thesis focused on the legal concept of responsibility. He was struck by the fact that responsibility first arose between persons not as a law, but as an emotion – a deep-seated feeling. He argued that co-responsibility implies a bond between persons grounded less in reason than in the conditions of their emergence as persons. As children, individuals do not emerge out of nothing, but through deep engagement with prior persons – their caregivers. Thus, moral responsibility could not have arisen from adherence to norms or rules; rather, norms and rules emerged from the sense of responsibility that humans acquire as they become persons.
This led him to question how we become thinking beings. Do all humans, after all, think in the same way? He began reading the increasingly sophisticated ethnographic accounts emerging from Australia, Africa, Asia and South America, and was deeply influenced by an extended trip to China. He was an empirical realist, but also a personalist – that is, he accorded primacy to the person as such, refusing to subsume the individual into the group. In this respect, he was not persuaded by the arguments of the great sociologist Émile Durkheim concerning the exceptional status of the ‘sacred’ or the special powers of ‘collective consciousness’. Lévy-Bruhl soon arrived at a striking conclusion: in their everyday practices and especially in their ritual actions, the so-called ‘primitive’ peoples studied by ethnographers did not appear to conform to the norms of logic that had been regarded as universally valid since the time of Aristotle.
As a friend of his put it, Lévy-Bruhl discovered that such peoples are characterised by ‘a mystical mentality – full of the “supernatural in nature” and prelogic, of a different kind than ours’. Indeed, the basic principles of Aristotelian logic that continue to guide scientific thinking – underpinning modern technological development – seemed to be ignored by premodern peoples. Aristotle’s law of the excluded middle (p or not-p) did not appear to apply to their ‘mystical’ modes of thought, both because they tended to think in terms of concrete objects rather than abstractions, and because they exhibited what Lévy-Bruhl termed ‘participation’…
[de Pina Cabral traces the development of Lévy-Bruhl’s thought, starting with Plato’s concept of methexis; elaborates on Lévy-Bruhl’s ideas; and traces the advances in cognitive science and developmental psychology that support them…]
… the very experience of personhood – that is, the sense that I am myself – is not ‘individual’, since its emergence presupposes a prior condition of being-with others. The self arises from a sharing of being with others, from having been part of those who are close to us. One does not emerge as an addition to society, but rather as a partial separation from the participations that initially constituted one’s being.
As I become a person, I learn to relate to myself as an other; I transcend my immediate position in the world. Without this, I would not be able to speak a language, since the use of pronouns presupposes reflexive thought. Thus, as Lévy-Bruhl had already insisted in his notebooks, participation precedes the person. Intersubjectivity is not the meeting of already constituted subjects, but the ground from which subjectivity emerges. Participation, therefore, may be understood as the constitutive tension between the singular and the plural in the formation of the person in the world. In 1935, the great phenomenologist Edmund Husserl expressed this insight clearly in a letter to Lévy-Bruhl where he thanked him for his ideas on participation:
Saying ‘I’ and ‘we’, [persons] find themselves as members of families, associations, [socialities], as living ‘together’, exerting an influence on and suffering from their world – the world that has sense and reality for them, through their intentional life, their experiencing, thinking, [and] valuing.
In acting and being acted upon together in human company during the first year of life, children become ‘we’ at the same time as they become ‘I’, which means that persons are always, ambivalently, both ‘I’ and ‘we’. Participation and transcendence will remain sources of theoretical perplexity for as long as the ‘we’ is approached as a categorical matter – a question of ‘identity’ – rather than as the presence and activity of living persons in dynamic interaction with the world and with one another.
By contrast, once we accept that personhood is the outcome of a process – the encounter between the embodied capacities of human beings and the historically constituted world that surrounds them – participation loses its mystery. As Lévy-Bruhl put it in one of his final notes: ‘The impossibility for the individual to separate within himself what would be properly him and what he participates in in order to exist …’ Participation, therefore, is the ground upon which everyday social interaction is constituted. The ‘mystical’ (or transcendental) potential within each of us – that which animates the symbolic life of groups – is part of the very process through which each of us becomes ourselves…
How does one become a person? “We” before “I”: “To be is to participate,” from @aeon.co.
A (if not the) next question: how does personhood emerge when the formative interactions are increasingly mediated/attenuated by technology?
* Shakespeare, All’s Well That Ends Well, Act 4, Scene 3
###
As we get together, we might send behaviorist birthday greetings to a man whose work focused on how one might train the “persons” who emerge: Kenneth Spence; he was born on this date in 1907. A psychologist, he worked to construct a comprehensive theory of behavior to encompass conditioning and other simple forms of learning and behavior modification.
Spence attempted to establish a precise, mathematical formulation to describe the acquisition of learned behavior, trying to measure simple learned behaviors (e.g., salivating in anticipation of eating). Much of his research focused on classically conditioned, easily measured, eye-blinking behavior in relation to anxiety and other factors.
One of the leading theorists of his time, Spence was the most cited psychologist in the 14 most influential psychology journals in the last six years of his life (1962–1967). A Review of General Psychology survey, published in 2002, ranked Spence as the 62nd most cited psychologist of the 20th century.
“Always look on the bright side of life”*…
The estimable economic historian Louis Hyman has been engaged in an on-going “friendly debate” with his equally-estimable friend and Johns Hopkins colleague Rama Chellappa on “what AI means”…
… As I see this debate, this question of our age, there are two main questions that history can shed some light on.
- Is AI a complement or a substitute for labor? That is, will it increase demand for and the productivity of workers, or decrease it?
- Will AI be controlled by the few or be accessible to the many?
A Complement or a Substitute?
Consider some of the most important technologies of the past 200 years.
When I am asked about what automation might look like, I inevitably discuss agriculture. Roughly all of our ancestors were farmers and approximately none of us today are. Yet we still eat bread made from wheat. That shift is possible because of automation.
The mechanical thresher, used to process wheat, was a substitute for the most backbreaking work of the harvest. But it also enabled more land to be cultivated, and that land was cultivated more efficiently, allowing for greater harvests. Mechanization of the farm, like the thresher, turned the American Midwest into the breadbasket of the world.
Those displaced farmers found work on railroads, moving all that grain. And those jobs, according to people at the time, were a kind of liberation from the raw animal labor of threshing. On net, mechanization created demand for more workers at better wages, in work more fit for people than for beasts. Those who remained farmers found other, higher-value work to do. On a farm, there is always more work to do.
The failure, then and now, is to think farmers were only threshers. That was one part of their jobs. Today, our work, for most people, is also a bundle of tasks. Workers then and now could and can focus on parts of their job that are of higher value. And in a new economy, new tasks in new industries will be created. Many of the jobs that we do today (web designer, UI expert) were simply unimaginable in 1850. That is a good thing.
Consider now the assembly line. I’m sure you all know about the staggering increases in productivity that come from the division of labor. If you take my class in industrial history, you would learn deeply about the story of the automobile. With the assembly line, and no other change in technology, car assembly went from 12 and a half hours to about 30 minutes (once they worked out the kinks). Did this reduce the demand for workers? No. It reduced the price of cars. And that increased the demand for workers, who eventually could demand even higher wages through unionization.
It is important here to realize that better tools don’t make us get paid worse. They generally make us get paid more. Why? Because the tool, without the person, is useless. Even for today’s most cutting-edge AIs, that is true. It can code, but it can only code what I imagine it to code. It can draw, but only what I imagine it to draw. That is true for AIs as it was true for the thresher.
So, I would offer that AI will create more growth, more abundance. In the long run, all growth comes from higher productivity.
I would add one more piece to this story. Economic inequality has worsened since roughly 1970. It has worsened, therefore, not in the industrial era but in the digital era. I have argued elsewhere that this happened because for decades we did not use computers as tools of automation but as glorified typewriters (and then as televisions). Our productivity did not increase, certainly not enough to justify the expense of computers. Economists have debated for decades over the lack of productivity gains that came with the “digital age” of computing, but the explanation is simple: we didn’t use them as computers. Now we can.
For the first time now, normal people with their normal problems can use their computers to solve and automate their problems. AI can write code. AI can automate their tedium. The digital age did not bring any gains because it had not yet arrived. We were living through the last gasp of the industrial economy.
It is now here.
This technology will unleash unimaginable productivity gains. It will level the playing field between coders and the rest of us. Coders will lose their jobs, to be sure, but for the rest of us, the bundle of workplace tasks will become much better.
And truthfully, the demand for real computer scientists will probably increase in the era of vibe-coding. Computer science itself is a bundle of skills, of which coding is just one. The more important skill – software and data architecture – will only increase in demand as the usefulness of software expands…
[Hyman goes on to explore the dangers of monopolization (which, for reasons he explains, he believes are overstated); the future of software (which, he believes, will skew to open-source); and of hardware (which, he believes, will not be a bottleneck). He concludes…]
… Put together we come to a very different picture of what the digital age will be. The industrial age required massive investments to build the factories to make the products that were in demand. In the digital age, in contrast, the factories to build digital products will be made by the AI on your laptop. That is not inequality. That is equality.
The physical products of the Fordist industrial age were made for the mass market. In contrast, the digital products of the post-Fordist digital age will be long-tail products. I don’t need to make mass market products; I can make them for a small niche, or just for myself.
Rather than fostering inequality, AI, then, is a great equalizer. To make products for a global market you don’t need a billion-dollar factory. You just need a laptop. That is astonishing.
That said, it will not be all sunshine and rainbows. Will AI solve the inequities of capitalism or its reliance on externalities as a source of primitive accumulation? Probably not.
But at the same time, AI is not a normal technology in that it has the potential to radically undermine many of the tendencies to concentrate capital that we have seen in the industrial age. We have been automated out of work before, that is nothing new, but it has always concentrated capital in the hands of the few. For the first time, there is potentially an alternative path forward.
AI will bring the digital age out of the hands of the coders. AI will not widen the gap—it will bridge it. Its ubiquity will mean that AI will be a tool that nearly all of us will be able to use in our daily work, which will make ordinary people more productive and prosperous…
Eminently worth reading in full: “Hooray! Post-Fordism Is Finally Here!”
Even as Hyman’s message is reassuring in the context of the flood of jeremiads in which we’re awash, it’s worth remembering that eerily similar points were made a couple of decades ago about the threat/promise of digital publishing/commerce. Given the then-current conditions and then-plausible futures, those predictions might have come true… but in the event, they didn’t pan out as projected. That said, things are changing, so maybe this time things are different?
(Image above: source)
* song (by Eric Idle) from Monty Python’s Life Of Brian
###
As we resolve to remain rosy, we might send productive birthday greetings to Andrew Meikle; he was born on this date in 1719. A Scottish millwright, he invented the threshing machine (for separating grain from husks and stalks, as mentioned above). One of the key developments of the British Agricultural Revolution in the late 18th century, it was also one of the main causes of the Swing Riots – an 1830 uprising by English and Scottish agricultural workers protesting agricultural mechanization and harsh working conditions.