(Roughly) Daily

“It is by the deep, hidden currents that the oceans are made one”*…

The global conveyor belt, shown in part here, circulates cool subsurface water and warm surface water throughout the world. The Atlantic Meridional Overturning Circulation is part of this complex system of global ocean currents. This illustration is captured from a short video produced by NOAA Science on a Sphere.

A significant part of the earth’s climate infrastructure is under threat. New research suggests the Atlantic Meridional Overturning Circulation (or AMOC) could weaken by half this century, with wide-ranging consequences for weather, food, and sea levels across the world. Alison Smart and Charlotte Venner unpack the past and ponder the future of this critical ocean current…

London, England, and Quebec City, Canada sit at roughly the same latitude (51°N and 47°N, respectively) but have vastly different climates. Historically, Quebec City had 99 freezing days in an average year—weather you might expect from its relative proximity to the Arctic—but London only experienced three freezing days in an average year, despite being slightly further north. This difference is largely due to an ocean current called the Atlantic Meridional Overturning Circulation (AMOC), which distributes warmth from the Tropics via the Atlantic Ocean.

Now, impacts from climate change are weakening the AMOC, and it could collapse entirely in the near future. AMOC collapse would rapidly make regions of the Northern Hemisphere with historically mild weather colder and harsher, while triggering irreversible changes in the global climate. 

The AMOC is both the product of a stable climate and a factor in maintaining weather patterns around the planet. To plan for future scenarios, we need to first understand how the AMOC works and what might happen if it collapses…

[Smart and Venner explain the AMOC and outline the ways in which it shapes the climate of regions around the world…]

… Even minor weakening of the AMOC can significantly impact local climates, as has happened several times in the past 12,000 years. A “Little Ice Age” occurred in Europe in the Middle Ages, likely connected to a disruption in the AMOC. Just a slight slowdown in the AMOC could make Europe colder overall, disrupt global precipitation patterns from South America to India, and worsen drought in Africa.

The more freshwater pours into the ocean, and the more ocean temperatures rise, the weaker the AMOC becomes—until, at some threshold, it could stop moving altogether. 

It is possible that the AMOC will collapse entirely if warming continues. There is no agreed-upon global average temperature at which collapse becomes certain, but there are signals we can track and historical examples we can examine to predict the likelihood of collapse…

… The consequences of total AMOC collapse would be far-reaching, severe, and irreversible on timescales relevant to humans. AMOC collapse would cool parts of the Northern Hemisphere and warm parts of the Southern Hemisphere by multiple degrees Celsius and drastically alter weather around the world.

In Europe, winter temperatures would drop, cold snaps could increase, and winter storms would intensify. A 2025 research letter found that, even if global warming reached 2°C, AMOC collapse would make Europe colder than it is today, creating extreme winters in Northwestern Europe in which record cold might reach -20°C (-4°F) in London and -50°C (-58°F) in Scandinavia. Even milder cold days would increase, with approximately 150 to 180 frost days per year in Utrecht, Netherlands, compared to a historic average of about 53. Precipitation would likely shift and decrease, potentially drying out some parts of Europe and making others wetter. 

Around the world, other climates would change, likely in less extreme ways.

  • North America. The East Coast of North America would likely experience rapid sea level rise as the weakening circulation lets water pile up along the coast, as well as cooler conditions, with some parts of Eastern Canada and the North Atlantic coast cooling by several degrees Celsius, more erratic storms, greater weather variability, and more intense hurricanes.
  • Tropics & South America. Without the AMOC, the ITCZ would shift south, potentially leading to drying in the Northern Tropics and parts of the Amazon and wetter conditions in the Southern Tropics. 
  • Africa. Because of the shift in the ITCZ, West Africa and the Sahel would be much drier, experiencing severe and frequent drought and reduced rainy seasons. The Sahel could possibly transition from a semi-arid climate to hot dry desert. 
  • Asia. Because of the shift in the ITCZ, weakened and more erratic monsoons in Asia would lead to increased drought and a higher risk of extreme precipitation events.

These changes may occur rapidly, create climate risks, and cause systemic disruption in affected regions. The collapse of the AMOC would also be a tipping point in the global climate, meaning that the changes would likely be difficult, if not impossible, to reverse on human timescales.

Once the AMOC passes a critical threshold of weakening, called a tipping point, it would continue to weaken until it collapses. AMOC collapse could also create systemic impacts that activate other tipping points as well as feedback loops that could generate further warming. 

For example, if AMOC collapse contributed to changes like a permanent dieback of the Amazon Rainforest or increased ice loss, those changes would generate their own warming effect on Earth’s climate. A 2026 paper suggests that AMOC collapse would result in substantial carbon release from oceans and add around 0.2°C in additional atmospheric warming.
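
To make “tipping point” concrete: the toy sketch below (in Python) is not a climate model and is not from Smart and Venner’s article; it is the generic double-well system that textbooks use to illustrate bistability, with the forcing standing in loosely for freshwater input. It shows the one property that matters here: once the forcing pushes the system past its threshold, removing the forcing does not bring it back.

```python
# A minimal sketch of tipping-point behaviour, using the generic
# double-well system dx/dt = x - x**3 + f. This illustrates bistability
# and hysteresis only; it is NOT a model of the AMOC, and all names and
# numbers are illustrative.

def run(forcing, x0, dt=0.01, steps=20_000):
    """Integrate dx/dt = x - x^3 + forcing with simple Euler steps."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + forcing)
    return x

# Start on the "strong circulation" branch (x ≈ +1).
print(run(forcing=0.0, x0=1.0))    # stays at +1: stable with no forcing
# Push the forcing past the critical value (≈ 0.385 for this system):
print(run(forcing=-0.5, x0=1.0))   # ≈ -1.2: the upper branch vanishes
# Remove the forcing again; the system does NOT recover (hysteresis):
print(run(forcing=0.0, x0=-1.0))   # stays near -1: stuck on the collapsed branch
```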

Reducing greenhouse gas emissions may slow warming enough to reduce weakening and delay collapse. If collapse begins, it is unlikely we could stop it. There is no feasible technological way to reengineer ocean currents…

A bracing, but important read: “A complete guide to the Atlantic Meridional Overturning Circulation (AMOC).”

See also: “What would happen if the Atlantic Meridional Overturning Circulation (AMOC) collapses? How likely is it?” 

* Rachel Carson

###

As we put on our sailin’ shoes, we might send interconnected birthday greetings to Andrew Sherratt; he was born on this date in 1946. An archaeologist, his application of world-systems theory to questions of change on a large, often global, scale made him one of the most influential archaeologists of the late 20th/early 21st centuries. Sherratt is best known for his theory of the secondary products revolution; but his work touched on a broad range of fundamental human developmental issues: global migration and colonization, the spread of agriculture, the development of metallurgy and urbanism, and the development of new forms of consumption, to name a few. All of those dynamics were, as Sherratt observed, shaped in significant ways by the climatic conditions in which they unfolded.

source

“I cannot teach anybody anything. I can only make them think.”*…

Death of Socrates, Jacques-Louis David (source)

Benjamin Ross Hoffman puts “the Socratic Method” into context– important, timely context…

There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency. [or impiety– see here] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk.

Euthyphro’s confidence is striking. His own family thinks it is indecent for a son to prosecute his father; Euthyphro insists that true decency demands it, that he understands what the gods require better than his relatives do. Socrates, who is about to be tried for teaching indecency toward the gods, asks Euthyphro to explain what decency actually is, since Euthyphro claims to know, and Socrates will need such knowledge for his own defense.

Euthyphro’s first answer is: decency is what I am doing right now, prosecuting wrongdoers regardless of kinship. Socrates points out that this is an example, not a definition. There are many decent acts; what makes them all decent?

Euthyphro tries again: decency is what the gods love. But the gods disagree among themselves, Socrates observes, so by this definition the same act could be both decent and indecent. Euthyphro refines: decency is what all the gods love. And here Socrates asks a question Euthyphro cannot answer: do the gods love decent things because they are decent, or are things decent because the gods love them?

If decent things are decent because the gods love them, then decency is arbitrary, a matter of divine whim. Socrates is too polite to say so, but the implication is: if decency is defined by the arbitrary whim of our betters, who are you to prosecute your father?

If the gods love decent things because they are decent, then however we know this, we already know the standard for decency ourselves and can cut out the middleman. But then Euthyphro should be able to explain the standard. He can’t.

Euthyphro tries a few more times, suggesting that decency is a kind of service to the gods, a kind of trade with the gods. Each time Socrates gently follows the definition to its consequences, and each time it collapses. Eventually Euthyphro leaves, saying he is in a hurry. Socrates’ last words are a lament: you have abandoned me without the understanding I needed for my own defense.

This is usually read as a proto-academic dialogue about definitions. It is a scene from a civilization in crisis. A man is about to use the legal system to destroy his own father on the basis of a concept he cannot define, in a courthouse where another man is about to be destroyed by the same concept. And the man who cannot define it is not unusual. He is representative.

The indecency for which Socrates was being prosecuted seems to have consisted of asking just the sort of questions Socrates posed to Euthyphro…

[Hoffman sketches the culture and politics of Athens in the late fifth century, the role of the Sophists, and the (radical) role that Socrates played…]

… Plato also responded to his beloved mentor’s death by founding the Academy, a great house in Athens where philosophical reasoning was taught methodically. We still have our Academics.

Agnes Callard, in her recent book Open Socrates, wants Socrates to be timeless. She strips out the historical situation, strips out the aliveness that preceded the method, and ends up defending a method that’s obviously inapplicable in many of the cases where she claims it applies. Aristarchus did not need his assumptions questioned at random. He needed someone who could ask probing questions about his actual problem, from a perspective that didn’t share his assumptions about what was and wasn’t possible.

Zvi Mowshowitz, in his review of Callard’s book (part 1, part 2), argues at considerable length that the decontextualized version is bad. He is right. Cached beliefs are usually fine. Destabilizing them is usually harmful. Most people do not want to spend their lives in Socratic questioning, and they are right.

But Zvi has written a long polemic in two installments on the winning side of an incredibly lame debate about whether we should anxiously doubt ourselves all the time, responding to Callard’s decontextualized Socrates, not the real one. The real one did not devise a method and then apply it. He had a quality, something the oracle reached for the language of the tragedians to describe. And what was memorialized as a “method” was what happened when that quality met a city where every other participant in public life had stopped being alive.

Socrates invokes timeless considerations like logical coherence and commitment (even provisional) to specific claims; these are very natural things to appeal to when people are being squirmy, dramatic, hard to pin down, and fleeing to abstractions that resist falsification.

Spinoza, in the Theologico-Political Treatise, similarly resituated the teachings of Jesus of Nazareth in their proper context. The political teachings of the Gospels to turn the other cheek, forgive debts, and render unto Caesar what is due to him, are instructions for people living under a hostile and extractive system of domination. Citizens of a free republic have entirely different duties. They have an affirmative obligation to hold each other accountable, to sue people who have wronged them, to participate in collective self-governance. The teachings are not wrong. They are addressed to a specific situation, and become wrong when mechanically transplanted into an inappropriate context.

The reason to recover the historical Socrates is not only accuracy about the distant past; it is that by seeing this relevant aspect of the past more clearly, we might see more clearly what we are up against now.

Socratic cross-examination requires an interlocutor who at least would feel ashamed not to put on a show of accountability. The people Socrates questioned were performing wisdom, but they were performing it because the culture still demanded that leaders seem accountable. They would sit for the examination, because refusing would be disgraceful, like breaking formation in a hoplite phalanx. Their scripts collapsed because the scripts were designed to look like real accountability, and real accountability is what Socrates brought.

There is a useful framework for understanding how public discourse degrades, which distinguishes between guilt, shame, and depravity. A guilty person has violated a norm and intends to repair the breach by owning up and making amends. An ashamed person intends to conceal the violation, which means deflecting investigation. A depraved person has generalized the intent to conceal into a coalitional strategy: I will cover for you if you cover for me, and together we will derail any investigation that threatens either of us.

The leaders Socrates questioned were, at worst, ashamed. They had taken on roles they couldn’t account for, and they wanted to hide that fact, but they still felt the force of the demand for accountability. When Socrates pressed them, they squirmed, they went in circles, they eventually fled. But they engaged. They felt they had to engage. The culture of Athens, even in its degraded state, still held that a man who refused to give an account of his claims was disgraced.

Depravity is a further stage, and Sartre described it precisely in his book Anti-Semite and Jew:

Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.

The depraved person does not perform accountability. He plays with the forms of accountability to exhaust and humiliate the person who still takes them seriously. He is not running a script that is trying to pass as a perspective, collapsing only under the kind of questioning we still call Socratic. He is amusing himself at the expense of the questioner. Cross-examination does not expose him, because he was never trying to seem consistent. He was trying to demonstrate that consistency is for suckers. The Socratic method will not help him.

The Socratic method, if we can rightly call it that, was forged by the pressures confronted by a living mind in a city of the ashamed, people who still cared enough about accountability to fake it. It has nothing to say to the depraved themselves, who have dispensed with the pretense, though in a transitional period it might expose them to the judgment of the naïve.

But the quality that preceded the method is something else.

What the oracle recognized in Socrates was not the ability to cross-examine. It was something closer to what it recognized in Euripides: the capacity to be present to what is happening, to see the person in front of you rather than the drama you are supposed to enact with them, to respond to the situation rather than to your script about the situation. To be alive.

We do not need a new method. Methods are what you formalize after you understand the problem, and we are not there yet. What might still help us is the quality that precedes method: the willingness to see what is in front of us, to say the obvious thing that everyone embedded in the performance is too scripted to see, and to keep reaching out to others even when the response is usually not even embarrassment but indifference, not even a failed defense but a smirk.

The oracle didn’t say Socrates had the best method. It said he was the wisest man, in a society oriented against wisdom. The “method” was just how aliveness was memorialized by a city that still cared enough to be ashamed of being dead.

The question for us is what aliveness looks like in a city beyond shame…

Eminently worth reading in full.

The Socratic Method and the importance of recognizing and responding to the times in which we live: “Socrates is Mortal”

See also: “The real reason Socrates was given the death sentence– humiliating powerful people was not a key to success”

Apposite: “What Separates The Great From The Petty In History” (“embracing the relentless ally of reality makes all the difference”)

* Socrates

###

As we inhabit our moment, we might send thoughtful birthday greetings to David Hume; he was born on this date in 1711. A philosopher, historian, economist, and essayist, he developed a highly influential system of empiricism, philosophical scepticism, and metaphysical naturalism.

Hume strove to create a naturalistic science of man that examined the psychological basis of human nature. Hume followed John Locke in rejecting the existence of innate ideas, concluding that all human knowledge derives solely from experience; this places him amongst such empiricists as Francis Bacon, Thomas Hobbes, Locke, and George Berkeley.

Hume argued that inductive reasoning and belief in causality cannot be justified empirically; instead, they result from custom and mental habit. People never actually perceive that one event causes another but experience only the “constant conjunction” of events. This problem of induction means that to draw any causal inferences from past experience, it is necessary to presuppose that the future will resemble the past; this metaphysical presupposition cannot itself be grounded in prior experience.

An opponent of philosophical rationalists, Hume held that passions rather than reason govern human behaviour, proclaiming that “Reason is, and ought only to be the slave of the passions.” Hume was also a sentimentalist who held that ethics are based on emotion or sentiment rather than abstract moral principle. He maintained an early commitment to naturalistic explanations of moral phenomena and is usually accepted by historians of European philosophy to have first clearly expounded the is–ought problem, or the idea that a statement of fact alone can never give rise to a normative conclusion of what ought to be done.

Hume denied that people have an actual conception of the self, positing that they experience only a bundle of sensations and that the self is nothing more than this bundle of perceptions connected by an association of ideas. Hume’s compatibilist theory of free will takes causal determinism as fully compatible with human freedom. His philosophy of religion, including his rejection of miracles and critique of the argument from design, was especially controversial. Hume left a legacy that affected utilitarianism, logical positivism, the philosophy of science, early analytic philosophy, cognitive science, theology, and many other fields and thinkers. Immanuel Kant credited Hume as the inspiration that had awakened him from his “dogmatic slumbers.”

– source

Apropos the piece featured above, see Peter Kreeft’s Socrates Meets Hume: The Father of Philosophy Meets the Father of Modern Skepticism (“A Socratic Examination of [Hume’s] An Enquiry Concerning Human Understanding“)

Written by (Roughly) Daily

May 7, 2026 at 1:00 am

“The web of our life is of a mingled yarn”*…

In what does our personhood consist? From what/where does it come? João de Pina-Cabral unpacks the seminal thinking of Lucien Lévy-Bruhl and the advances in cognitive science and developmental psychology that suggest that a person is not self-contained, but the outcome of a lifelong process of living with others…

It matters to understand what constitutes a person. After all, if there is one feature that distinguishes human society from other forms of sociality, it is that, at around one year of age, most human beings attain personhood: they learn to speak a language, develop object permanence – the understanding that things do not disappear when out of sight – and relate to others in consciously moral ways. Should all persons be accorded the same rights and duties by virtue of this condition? These are weighty questions that have occupied social scientists and philosophers since antiquity – particularly at moments such as the present, when war and imperial oppression once again raise their ugly heads.

Nevertheless, this question cannot be approached as a purely moral matter, for in order to determine what rights and duties may be attributed to persons, it is necessary to establish what persons are. This longstanding perplexity can now be addressed in increasingly sophisticated ways, following a century of sustained anthropological enquiry.

In September 1926, two of the most eminent anthropologists of the day met in person for the first time in New York. Both were Jewish and born in Europe, but one – Franz Boas – had become an American citizen and was a leading figure at Columbia University in New York, while the other – Lucien Lévy-Bruhl – was a professor in Paris. Both were highly learned, humanistically inclined and politically liberal; they respected one another, yet they did not seem to agree about the matter of the person.

Lévy-Bruhl had begun his career as a philosopher of ethics. His doctoral thesis focused on the legal concept of responsibility. He was struck by the fact that responsibility first arose between persons not as a law, but as an emotion – a deep-seated feeling. He argued that co-responsibility implies a bond between persons grounded less in reason than in the conditions of their emergence as persons. As children, individuals do not emerge out of nothing, but through deep engagement with prior persons – their caregivers. Thus, moral responsibility could not have arisen from adherence to norms or rules; rather, norms and rules emerged from the sense of responsibility that humans acquire as they become persons.

This led him to question how we become thinking beings. Do all humans, after all, think in the same way? He began reading the increasingly sophisticated ethnographic accounts emerging from Australia, Africa, Asia and South America, and was deeply influenced by an extended trip to China. He was an empirical realist, but also a personalist – that is, he accorded primacy to the person as such, refusing to subsume the individual into the group. In this respect, he was not persuaded by the arguments of the great sociologist Émile Durkheim concerning the exceptional status of the ‘sacred’ or the special powers of ‘collective consciousness’. Lévy-Bruhl soon arrived at a striking conclusion: in their everyday practices and especially in their ritual actions, the so-called ‘primitive’ peoples studied by ethnographers did not appear to conform to the norms of logic that had been regarded as universally valid since the time of Aristotle.

As a friend of his put it, Lévy-Bruhl discovered that such peoples are characterised by ‘a mystical mentality – full of the “supernatural in nature” and prelogic, of a different kind than ours’. Indeed, the basic principles of Aristotelian logic that continue to guide scientific thinking – underpinning modern technological development – seemed to be ignored by premodern peoples. Aristotle’s law of the excluded middle (p or not-p) did not appear to apply to their ‘mystical’ modes of thought, both because they tended to think in terms of concrete objects rather than abstractions, and because they exhibited what Lévy-Bruhl termed ‘participation’…

[de Pina-Cabral traces the development of Lévy-Bruhl’s thought, starting with Plato’s concept of methexis; elaborates on Lévy-Bruhl’s ideas; and traces the advances in cognitive science and developmental psychology that support them…]

… the very experience of personhood – that is, the sense that I am myself – is not ‘individual’, since its emergence presupposes a prior condition of being-with others. The self arises from a sharing of being with others, from having been part of those who are close to us. One does not emerge as an addition to society, but rather as a partial separation from the participations that initially constituted one’s being.

As I become a person, I learn to relate to myself as an other; I transcend my immediate position in the world. Without this, I would not be able to speak a language, since the use of pronouns presupposes reflexive thought. Thus, as Lévy-Bruhl had already insisted in his notebooks, participation precedes the person. Intersubjectivity is not the meeting of already constituted subjects, but the ground from which subjectivity emerges. Participation, therefore, may be understood as the constitutive tension between the singular and the plural in the formation of the person in the world. In 1935, the great phenomenologist Edmund Husserl expressed this insight clearly in a letter to Lévy-Bruhl where he thanked him for his ideas on participation:

Saying ‘I’ and ‘we’, [persons] find themselves as members of families, associations, [socialities], as living ‘together’, exerting an influence on and suffering from their world – the world that has sense and reality for them, through their intentional life, their experiencing, thinking, [and] valuing.

In acting and being acted upon together in human company during the first year of life, children become ‘we’ at the same time as they become ‘I’, which means that persons are always, ambivalently, both ‘I’ and ‘we’. Participation and transcendence will remain sources of theoretical perplexity for as long as the ‘we’ is approached as a categorical matter – a question of ‘identity’ – rather than as the presence and activity of living persons in dynamic interaction with the world and with one another.

By contrast, once we accept that personhood is the outcome of a process – the encounter between the embodied capacities of human beings and the historically constituted world that surrounds them – participation loses its mystery. As Lévy-Bruhl put it in one of his final notes: ‘The impossibility for the individual to separate within himself what would be properly him and what he participates in in order to exist …’ Participation, therefore, is the ground upon which everyday social interaction is constituted. The ‘mystical’ (or transcendental) potential within each of us – that which animates the symbolic life of groups – is part of the very process through which each of us becomes ourselves…

How does one become a person? “We” before “I”: “To be is to participate,” from @aeon.co.

A (if not the) next question: how does personhood emerge when the formative interactions are increasingly mediated/attenuated by technology?

* Shakespeare, All’s Well That Ends Well, Act 4, Scene 3

###

As we get together, we might send behaviorist birthday greetings to a man whose work focused on how one might train the “persons” who emerge: Kenneth Spence; he was born on this date in 1907. A psychologist, he worked to construct a comprehensive theory of behavior to encompass conditioning and other simple forms of learning and behavior modification.

Spence attempted to establish a precise, mathematical formulation to describe the acquisition of learned behavior, trying to measure simple learned behaviors (e.g., salivating in anticipation of eating). Much of his research focused on classically conditioned, easily measured, eye-blinking behavior in relation to anxiety and other factors.

One of the leading theorists of his time, Spence was the most cited psychologist in the 14 most influential psychology journals in the last six years of his life (1962–1967). A Review of General Psychology survey, published in 2002, ranked Spence as the 62nd most cited psychologist of the 20th century.

source

Written by (Roughly) Daily

May 6, 2026 at 1:00 am

“Always look on the bright side of life”*…

The estimable economic historian Louis Hyman has been engaged in an ongoing “friendly debate” with his equally estimable friend and Johns Hopkins colleague Rama Chellappa on “what AI means”…

… As I see this debate, this question of our age, there are two main questions that history can shed some light on.

  1. Is AI a complement or a substitute for labor? That is, will it increase demand for and the productivity of workers, or decrease it?
  2. Will AI be controlled by the few or be accessible to the many?

A Complement or a Substitute?

Consider some of the most important technologies of the past 200 years.

When I am asked about what automation might look like, I inevitably discuss agriculture. Roughly all of our ancestors were farmers and approximately none of us today are. Yet we still eat bread made from wheat. That shift is possible because of automation.

The mechanical thresher, used to process wheat, was a substitute for the most backbreaking work of the harvest. But it also enabled more land to be cultivated, and that land was cultivated more efficiently, allowing for greater harvests. Mechanization of the farm, like the thresher, turned the American Midwest into the breadbasket of the world.

Those displaced farmers found work on railroads, moving all that wheat. And those jobs, according to people at the time, were a kind of liberation from the raw animal labor of threshing. On net, it created demand for more workers at better wages in work more fit for people than beasts. Those who remained farmers found other, higher-value work to be done. On a farm, there is always more work to do.

The failure, then and now, is to think farmers were only threshers. That was one part of their jobs. Today, our work, for most people, is also a bundle of tasks. Workers then and now could and can focus on parts of their job that are of higher value. And in a new economy, new tasks in new industries will be created. Many of the jobs that we do today (web designer, UI expert) were simply unimaginable in 1850. That is a good thing.

Consider now the assembly line. I’m sure you all know about the staggering increases in productivity that come from the division of labor. If you take my class in industrial history, you would learn deeply about the story of the automobile. With the assembly line, and no other change in technology, car assembly went from 12 and a half hours to about 30 minutes (once they worked out the kinks). Did this reduce the demand for workers? No. It reduced the price of cars. And that increased the demand for workers, who eventually could demand even higher wages through unionization.

It is important here to realize that better tools don’t make us get paid worse. They generally make us get paid more. Why? Because the tool, without the person, is useless. Even for today’s most cutting-edge AIs, that is true. It can code, but it can only code what I imagine it to code. It can draw, but only what I imagine it to draw. That is true for AIs as it was true for the thresher.

So, I would offer that AI will create more growth, more abundance. In the long run, all growth comes from higher productivity.

I would add one more piece to this story. Economic inequality has worsened since roughly 1970. It has worsened, that is, not in the industrial era but in the digital era. I have argued elsewhere that this happened because for decades we did not use computers as tools of automation but as glorified typewriters (and then as televisions). Our productivity did not increase enough to justify the expense of computers. Economists have debated for decades over the lack of productivity growth that came with the “digital age” of computing, but the explanation is simple: we didn’t use them as computers. Now we can.

For the first time, normal people with their normal problems can use their computers to solve and automate those problems. AI can write code. AI can automate their tedium. The digital age did not bring any gains because it had not yet arrived. We were living through the last gasp of the industrial economy.

It is now here.

This technology will unleash unimaginable productivity gains. It will level the playing field between coders and the rest of us. Coders will lose their jobs, to be sure, but for the rest of us, the bundle of workplace tasks will become much better.

And truthfully, the demand for real computer scientists will probably increase in the era of vibe-coding. Computer science itself is a bundle of skills, of which coding is just one. The more important skill – software and data architecture – will only increase in demand as the usefulness of software expands…

[Hyman goes on to explore the dangers of monopolization (which, for reasons he explains, he believes are overstated); the future of software (which, he believes, will skew to open-source) and of hardware (which, he believes, will not be a bottleneck). He concludes…]

… Put together, we come to a very different picture of what the digital age will be. The industrial age required massive investments to build the factories to make the products that were in demand. In the digital age, in contrast, the factories to build digital products will be made by the AI on your laptop. That is not inequality. That is equality.

The physical products of the Fordist industrial age were made for the mass market. In contrast, the digital products of the post-Fordist digital age will be long-tail products. I don’t need to make mass market products; I can make them for a small niche, or just for myself.

Rather than fostering inequality, AI, then, is a great equalizer. To make products for a global market you don’t need a billion-dollar factory. You just need a laptop. That is astonishing.

That said, it will not be all sunshine and rainbows. Will AI solve the inequities of capitalism or its reliance on externalities as a source of primitive accumulation? Probably not.

But at the same time, AI is not a normal technology in that it has the potential to radically undermine many of the tendencies to concentrate capital that we have seen in the industrial age. We have been automated out of work before, that is nothing new, but it has always concentrated capital in the hands of the few. For the first time, there is potentially an alternative path forward.

AI will bring the digital age out of the hands of the coders. AI will not widen the gap—it will bridge it. Its ubiquity will mean that AI will be a tool that nearly all of us will be able to use in our daily work, which will make ordinary people more productive and prosperous…

Eminently worth reading in full: “Hooray! Post-Fordism Is Finally Here!”

Even as Hyman’s message is reassuring in the context of the flood of jeremiads in which we’re awash, it’s worth remembering that eerily similar points were made a couple of decades ago about the threat/promise of digital publishing/commerce. Given the then-current conditions and then-plausible futures, those predictions might have come true… but in the event, they didn’t pan out as projected. That said, things are changing, so maybe this time things are different?

(Image above: source)

* song (by Eric Idle) from Monty Python’s Life Of Brian

###

As we resolve to remain rosy, we might send productive birthday greetings to Andrew Meikle; he was born on this date in 1719. A Scottish millwright, he invented the threshing machine (for removing the husks from grain, as mentioned above). One of the key developments of the British Agricultural Revolution in the late 18th century, it was also one of the main causes of the Swing Riots— an 1830 uprising by English and Scottish agricultural workers protesting agricultural mechanization and harsh working conditions.

Threshing machine, invented by Andrew Meikle (source)

“Something that doesn’t actually exist can still be useful”*…

Gregory Barber on ultrafinitism, a philosophy that rejects the infinite. Ultrafinitism has long been dismissed as mathematical heresy, but it is also producing new insights in math and beyond…

Doron Zeilberger is a mathematician who believes that all things come to an end. That just as we are limited beings, so too does nature have boundaries — and therefore so do numbers. Look out the window, and where others see reality as a continuous expanse, flowing inexorably forward from moment to moment, Zeilberger sees a universe that ticks. It is a discrete machine. In the smooth motion of the world around him, he catches the subtle blur of a flip-book.

To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is. Equations define lines that carry on off the chalkboard, but to where? Proofs are littered with suggestive ellipses. These equations and proofs are, according to Zeilberger — a longtime professor at Rutgers University and a famed figure in combinatorics — both “very ugly” and false. It is “completely nonsense,” he said, huffing out each syllable in a husky voice that seemed worn out from making his point.

As a matter of practicality, infinity can be scrubbed out, he contends. “You don’t really need it.” Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely. Curves might look smooth, but they hide a fine-grit roughness; computers handle math just fine with a finite allowance of digits. (Zeilberger lists his own computer, which he named “Shalosh B. Ekhad,” as a collaborator on his papers.) With infinity eliminated, the only thing lost is mathematics that was “not worth doing at all,” Zeilberger said.
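
The “calculus without infinity” idea is easy to demonstrate. Here is a toy sketch in Python (an illustration added here, not Zeilberger’s own construction): the forward difference of finite calculus, a derivative computed with a fixed, finite step, no limit taken.

```python
# A toy illustration of calculus without infinitesimal limits: the
# forward difference of finite calculus. The step h stays fixed and
# finite, and no limit is ever taken. (Illustrative only; this is not
# Zeilberger's own construction.)

def forward_difference(f, h=1):
    """Return the discrete derivative x -> (f(x + h) - f(x)) / h."""
    return lambda x: (f(x + h) - f(x)) / h

def square(x):
    return x * x

d_square = forward_difference(square)

# For f(x) = x^2 the discrete derivative is 2x + h: close to the
# classical 2x, but obtained with plain finite arithmetic.
print([d_square(x) for x in range(5)])   # [1.0, 3.0, 5.0, 7.0, 9.0]
```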

Most mathematicians would say just the opposite — that it’s Zeilberger who spews complete nonsense. Not just because infinity is so useful and so natural to our descriptions of the universe, but because treating sets of numbers (like the integers) as actual, infinite objects is at the very core of mathematics, embedded in its most fundamental rules and assumptions.

At the very least, even if mathematicians don’t want to think about infinity as an actual entity, they acknowledge that sequences, shapes, and other mathematical objects have the potential to grow indefinitely. Two parallel lines can in theory go on forever; another number can always be added to the end of the number line.

Zeilberger disagrees. To him, what matters is not whether something is possible in principle, but whether it is actually feasible. What this means, in practice, is that not only is infinity suspect, but extremely large numbers are as well. Consider “Skewes’ number,” e^(e^(e^79)). This is an exceptionally large number, and no one has ever been able to write it out in decimal form. So what can we really say about it? Is it an integer? Is it prime? Can we find such a number anywhere in nature? Could we ever write it down? Perhaps, then, it is not a number at all.
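
Some rough logarithm arithmetic (added here as a back-of-the-envelope illustration, not from the article) shows just how unwritable e^(e^(e^79)) is; the sketch below never touches the number itself, only the size of its digit count:

```python
import math

# Skewes' number is e^(e^(e^79)). We can't evaluate it directly, so we
# work with logarithms throughout. (Back-of-the-envelope figures, added
# for illustration.)

log10e = math.log10(math.e)

# Digit count of Skewes' number ≈ log10(e^(e^79)) = e^79 * log10(e).
# That product overflows a float, so estimate ITS size with another log:
# log10(digit count) = 79 * log10(e) + log10(log10(e)).
log10_of_digit_count = 79 * log10e + math.log10(log10e)

print(f"e^79 ≈ {math.exp(79):.2e}")                    # ≈ 2.04e34
print(f"digit count ≈ 10^{log10_of_digit_count:.1f}")  # ≈ 10^33.9
# Even the COUNT of decimal digits has about 34 digits of its own;
# writing the number out in full is physically hopeless.
```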

This raises obvious questions, such as where, exactly, we will find the end point. Zeilberger can’t say. Nobody can. Which is the first reason that many dismiss his philosophy, known as ultrafinitism. “When you first pitch the idea of ultrafinitism to somebody, it sounds like quackery — like ‘I think there’s a largest number’ or something,” said Justin Clarke-Doane, a philosopher at Columbia University.

“A lot of mathematicians just find the whole proposal preposterous,” said Joel David Hamkins, a set theorist at the University of Notre Dame. Ultrafinitism is not polite talk at a mathematical society dinner. Few (one might say an ultrafinite number) work on it. Fewer still are card-carrying members, like Zeilberger, willing to shout their views out into the void. That’s not just because ultrafinitism is contrarian, but because it advocates for a mathematics that is fundamentally smaller, one where certain important questions can no longer be asked.

And yet it gives Hamkins and others a good deal to think about. From one angle, ultrafinitism can be seen as a more realistic mathematics. It is math that better reflects the limits of what people can create and verify; it may even better reflect the physical universe. While we might be inclined to think of space and time as eternally expansive and divisible, the ultrafinitist would argue that these are assumptions that science has increasingly brought into question — much as, Zeilberger might say, science brought doubt to God’s doorstep.

“The world that we’re describing needs to be honest through and through,” said Clarke-Doane, who in April 2025 convened a rare gathering of experts to explore ultrafinitist ideas. “If there might only be finitely many things, then we’d better also be using a math that doesn’t just assume that there are infinitely many things at the get-go.” To him, “it sure seems like that should be part of the menu in the philosophy of math.”

For mathematicians to take it seriously, though, ultrafinitists first need to agree on what they’re talking about — to turn arguments that sound like “bluster,” as Hamkins puts it, into an official theory. Mathematics is steeped in formal systems and common frameworks. Ultrafinitism, meanwhile, lacks such structure.

It is one thing to tackle problems piecemeal. It is quite another to rewrite the logical foundations of mathematics itself. “I don’t think the reason ultrafinitism has been dismissed is that people have good arguments against it,” Clarke-Doane said. “The feeling is that, oh, well, it’s hopeless.”

That’s a problem that some ultrafinitists are still trying to address.

Zeilberger, meanwhile, is prepared to abandon mathematical ideals in favor of a mathematics that’s inherently messy — just like the world is. He is less a man of foundational theories than a man of opinions, of which he lists 195 on his website. “I cannot be a tenured professor without doing this crackpot stuff,” he said. But one day, he added, mathematicians will look back and see that this crackpot, like those of yore who questioned gods and superstitions, was right. “Luckily, heretics are no longer burned at the stake.”…

Read on for the history of ultrafinitism, the critical dialogue surrounding it, and its implications: “What Can We Gain by Losing Infinity?” from @gregbarber.bsky.social in @quantamagazine.bsky.social.

* Ian Stewart (whose point was somewhat different from Zeilberger’s :-), Infinity: A Very Short Introduction

###

As we engage the endless, we might spare a thought for a man whose work touched on the infinitesimal, Isaac Barrow; he died on this date in 1677. A theologian and mathematician, he played a key role in the development of infinitesimal calculus (in particular, for a proof of the fundamental theorem of calculus). Barrow was the inaugural holder of the prestigious Lucasian Professorship of Mathematics at the University of Cambridge, a post later held by his student, Isaac Newton (who, of course, shares primary credit for the development of calculus with Gottfried Wilhelm Leibniz).

source