(Roughly) Daily


“Why is God making me suffer so much? Just because I don’t believe in him?”*…

David Shatz on an important– and surely the funniest– modern philosopher…

Many have heard the story about the British philosopher [Oxford linguistic philosopher J. L. Austin] who asserted in a lecture that, whereas in many languages a double negative makes a positive, in no language does a double positive make a negative. Instantly, from the back of the room, a voice piped up, “Yeah, yeah.”

While the story is well-known—and true—many do not know that the “yeah, yeah” came from Sidney Morgenbesser (1921-2004), a professor at Columbia University who later became the John Dewey Professor of Philosophy, and whose 10th yahrzeit will be marked this summer. Those who did not experience Morgenbesser could not fully appreciate James Ryerson’s words in his superb portrait in the “The Lives They Lived” issue of The New York Times Magazine: “The episode was classic Morgenbesser: The levity, the lightning quickness, the impatience with formality in both thought and manners, the gift for the knockout punch.” (Ryerson has long been working on a book about Morgenbesser.) Nor could most people know that this comic genius was revered by philosophers and other literati, including people of eminence and fame, as one of the truly spectacular philosophical minds of his time—someone whom, reportedly, no less a figure than Bertrand Russell considered one of the cleverest (that’s British for “smartest”) young men in the United States…

A man who would surely have tickled Wittgenstein’s funny bone: “‘Yeah, Yeah’: Eulogy for Sidney Morgenbesser, Philosopher With a Yiddish Accent,” in @tabletmag.

A few other examples of Morgenbesser’s wit:

• Morgenbesser in response to B. F. Skinner: “Are you telling me it’s wrong to anthropomorphize people?”

• In response to Leibniz’s ontological query “Why is there something rather than nothing?” Morgenbesser answered “If there were nothing you’d still be complaining!”

• Interrogated by a student whether he agreed with Chairman Mao’s view that a statement can be both true and false at the same time, Morgenbesser replied “Well, I do and I don’t.”

* Morgenbesser, a few weeks before his death from complications of ALS, to his friend and Columbia philosophy colleague David Albert


As we laugh and learn, we might recall that on this date in 1979, “Ring My Bell” was atop the pop charts.

Written by Frederick Knight, the composition was originally intended for then eleven-year-old Stacy Lattisaw, as a teenybopper song about kids talking on the telephone.  But when Lattisaw signed with a different label, Anita Ward was asked to sing it instead.

“Ring My Bell” went to number one on the Billboard Hot 100 chart, the Disco Top 80 chart, and the Soul Singles chart.  It also reached number one on the UK Singles Chart.  And it garnered Ward a nomination for Best Female R&B Vocal Performance at the 1980 Grammy Awards. It was her only hit.

See (and, of course, hear) Ward perform the song here.


Written by (Roughly) Daily

June 30, 2023 at 1:00 am

“The limits of my language mean the limits of my world”*…

It seems clear that we are on the verge of an impactful new wave of technology. Venkatesh Rao suggests that it may be a lot more impactful than most of us imagine…

In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.

But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.

Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.

And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…

What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?

What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…

There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.

Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create them. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.

Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…

Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.

(Image above: source)

* Ludwig Wittgenstein, Tractatus Logico-Philosophicus


As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, his thinking has had a powerful influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.

Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein— a founder of analytic philosophy, one principal focus of which was the philosophy of language.


“You shouldn’t rely on what you believe to be true. You might be mistaken. Everything can be questioned, everything doubted. The best option, then, is to keep an open mind.”*…

The ancient Sceptics– often called Pyrrhonists after Pyrrho, the ancient Greek master Sceptic who lived in the 4th and 3rd centuries BCE– used doubt as a way of investigating the world. As Mahdi Ranaee explains, later thinkers undermined even that possibility…

Ask any philosopher what scepticism is, and you will receive as many different answers as people you’ve asked. Some of them take it to be showing that we cannot have any knowledge – of, say, the external world – and some of them take it to be even more radical in showing that we cannot have any reasonable beliefs. In the interests of getting a handle on the varieties of scepticism, one can locate four different milestones of sceptical thought in the history of Western philosophy. These four milestones start with the least threatening of them, Pyrrhonian scepticism, and continue through Cartesian and Kantian scepticism to the Wittgensteinian moment in which even our intention to act is put in question…

Fascinating: “Known unknowables,” in @aeonmag.

* Pyrrho (as paraphrased by Nigel Warburton)


As we question questioning, we might spare a thought for a not-so-sceptical thinker, Thomas Carlyle; he died on this date in 1881.  A Victorian polymath, he was an accomplished philosopher, satirical writer, essayist, translator, historian, mathematician, and teacher.  While he was an enormously popular lecturer in his time, and his contributions to mathematics earned him eponymous fame (the Carlyle circle), he may be best remembered as a historian (and champion of the “Great Man” theory of history)… and as the coiner of phrases like “the dismal science” (to describe economics).

While not adhering to any formal religion, Carlyle asserted the importance of belief and developed his own philosophy of religion. He preached “Natural Supernaturalism,” the idea that all things are “Clothes” which at once reveal and conceal the divine, that “a mystic bond of brotherhood makes all men one,” and that duty, work, and silence are essential. He attacked utilitarianism as mere atheism and egoism, instead taking a medievalist tack, postulating the Great Man theory, a philosophy of history which argues that history is shaped by exceptional individuals. (Indeed, his thinking, which extended to a critique of democracy and an argument for “Heroarchy (Government of Heroes),” was appropriated and perverted by Nazi thinkers in Germany.)

Carlyle’s History of the French Revolution, a three-volume work that assured his fame as a historian, was finished in 1836 but not published until 1837 because John Stuart Mill’s maid mistook the manuscript of Volume One for kindling.  The setback prompted Carlyle to compare himself to “a man who has nearly killed himself accomplishing zero.”  But he re-wrote the first volume from scratch.

“A well-written Life is almost as rare as a well-spent one.”   – Thomas Carlyle


Written by (Roughly) Daily

February 5, 2023 at 1:00 am

“I love to talk about nothing. It’s the only thing I know anything about.”*…

Joy Walker: Three Boxes Three Ways, 2009

Nonbeing belongs to that category of concepts that seem self-evident and self-explanatory, but as FT explains, it has perplexed philosophers for ages…

Bertrand Russell’s 1951 obituary for Ludwig Wittgenstein is only a few paragraphs long, and the second consists largely of a single pointed anecdote:

Quite at first I was in doubt as to whether he was a man of genius or a crank…. He maintained, for example, at one time that all existential propositions are meaningless. This was in a lecture room, and I invited him to consider the proposition: “There is no hippopotamus in this room at present.” When he refused to believe this, I looked under all the desks without finding one; but he remained unconvinced.

The exchange is typical of the two philosophers’ relationship: Russell’s proper British demeanor was frequently ruffled by the Austrian’s dry humor. But it also illustrates two general approaches to philosophy: one that takes pleasure in complexities, absurdities, and ironies, and one that takes pleasure in resolving them. Just as Wittgenstein surely realized that there was no hippopotamus in the room, Russell surely realized that Wittgenstein’s objection could not be dispelled empirically by looking under each desk. At stake was not a fact of perception but the epistemological status of negation—the philosophical meaning and value of assertions about nothing.

Nothing, or nonbeing, belongs to that category of concepts—like being, space, and consciousness—that seem self-evident and self-explanatory to most people most of the time, but that for philosophy have been objects of deepest perplexity and millennia-long dispute. It’s a little like breathing, which happens automatically until we stop to think about it. To most of us, Russell’s statement “There is no hippopotamus in this room” is both easily understood and easily verified. We think we know what it means, and most of us would only need a quick look around to affirm or deny the proposition.

But here our troubles begin. If you look around the room and don’t see a hippopotamus, presumably you do still see something: some kind of perception or sensory data is reaching your consciousness, which allows you to make a judgment. What is it that you do see when you see a hippopotamus not being there? Are you perceiving a nonbeing, seeing a particular thing whose nature is absence, or are you not perceiving any being, seeing no “thing” at all? When you see a hippopotamus not being there, are you also seeing a whale and a lion and a zebra not being there? Is every room full of all the things that aren’t in it?

From an evolutionary perspective, one predator not being there is just as good as any other predator not being there, but dialectics and logic are a little more particular. If every possible animal is not there at the same time, what specific truth-value can the assertion “There is no hippopotamus in this room” possibly have? Hence Wittgenstein’s insistence, facetious or not, that all existential propositions are meaningless. In this manner the complications and implications of nothing spill into every area of philosophical inquiry, and we quickly come to sympathize with Aristophanes’ brutal satire of philosophers in The Clouds:

Socrates: Have you got hold of anything?

Strepsiades: No, nothing whatever.

Socrates: Nothing at all?

Strepsiades: No, nothing except my tool, which I’ve got in my hand.

Nonbeing, through the ages: “Apropos of Nothing,” from @ft_variations in @nybooks.

* Oscar Wilde


As we analyze absence, we might send mindful birthday greetings to Mahasi Sayadaw; he was born on this date in 1904. A Burmese Theravada Buddhist monk and meditation master, he had a significant impact on the teaching of vipassanā (insight) meditation in the West and throughout Asia.


“A nothing will serve just as well as a something about which nothing could be said”*…

Metaphysical debates in quantum physics don’t get at “truth,” physicist and mathematician Timothy Andersen argues; they’re nothing but a form of ritual activity and culture. After a thoughtful intellectual history of both quantum mechanics and Wittgenstein’s thought, he concludes…

If Wittgenstein were alive today, he might have couched his arguments in the vocabulary of cultural anthropology. For this shared grammar and these language games, in his view, form part of much larger ritualistic mechanisms that connect human activity with human knowledge, as deeply as DNA connects to human biology. It is also a perfect example of how evolution works by using pre-existing mechanisms to generate new behaviors.

The conclusion from all of this is that interpretation and representation in language and mathematics are little different than the supernatural explanations of ancient religions. Trying to resolve the debate between Bohr and Einstein is like trying to answer the Zen kōan about whether the tree falling in the forest makes a sound if no one can hear it. One cannot say definitively yes or no, because all human language must connect to human activity. And all human language and activity are ritual, signifying meaning by their interconnectedness. To ask what the wavefunction means without specifying an activity – an experiment – to extract that meaning is, therefore, as sensible as asking about the sound of the falling tree. It is nonsense.

As a scientist and mathematician, Wittgenstein has challenged my own tendency to seek out interpretations of phenomena that have no scientific value – and to see such explanations as nothing more than narratives. He taught that all that philosophy can do is remind us of what is evidently true. It’s evidently true that the wavefunction has a multiverse interpretation, but one must assume the multiverse first, since it cannot be measured. So the interpretation is a tautology, not a discovery.

I have humbled myself to the fact that we can’t justify clinging to one interpretation of reality over another. In place of my early enthusiastic Platonism, I have come to think of the world not as one filled with sharply defined truths, but rather as a place containing myriad possibilities – each of which, like the possibilities within the wavefunction itself, can be simultaneously true. Likewise, mathematics and its surrounding language don’t represent reality so much as serve as a trusty tool for helping people to navigate the world. They are of human origin and for human purposes.

To shut up and calculate, then, recognizes that there are limits to our pathways for understanding. Our only option as scientists is to look, predict and test. This might not be as glamorous an offering as the interpretations we can construct in our minds, but it is the royal road to real knowledge…

A provocative proposition: “Quantum Wittgenstein,” from @timcopia in @aeonmag.

* Ludwig Wittgenstein, Philosophical Investigations


As we muse on meaning, we might recall that it was on this date in 1954 that the official ground-breaking for CERN (Conseil européen pour la recherche nucléaire) was held. Located in Switzerland, it is the largest particle physics laboratory in the world… that’s to say, a prime spot to do the observation and calculation that Andersen suggests. Indeed, it’s been the site of many breakthrough discoveries over the years, maybe most notably the 2012 observation of the Higgs boson.

Because researchers need remote access to these facilities, the lab has historically been a major wide area network hub. Indeed, it was at CERN that Tim Berners-Lee developed the first “browser”– and effectively fomented the emergence of the web.

CERN’s main site, from Switzerland looking towards France