(Roughly) Daily

Posts Tagged ‘philosophy’

“Speed and acceleration are merely the dream of making time reversible”*…

In the early 20th century, there was Futurism…

The Italian Futurists, from the first half of the twentieth century… wanted to drive modernisation in turn-of-the-century Italy at a much faster pace. They saw the potential in machines, and technology, to transform the country, to demand progress. It was not, however, merely an incrementalist approach they were after: words like annihilation, destruction, and apocalypse appear in the writings of the Futurists, including the author of The Futurist Manifesto, Filippo Tommaso Marinetti. ‘We want to glorify war – the only cure for the world…’ Marinetti proclaimed – this was not for the faint-hearted! That same Marinetti was the founder of the Partito Politico Futurista in 1918, which became part of Mussolini’s Fascist party in 1919. Things did not go well after that.

Beautiful Ideas Which Kill: Accelerationism, Futurism and Bewilderment

And now, in the early 21st century, there is Accelerationism…

These [politically-motivated mass] killings were often linked to the alt-right, described as an outgrowth of the movement’s rise in the Trump era. But many of these suspected killers, from Atomwaffen thugs to the New Zealand mosque shooter to the Poway synagogue attacker, are more tightly connected to a newer and more radical white supremacist ideology, one that dismisses the alt-right as cowards unwilling to take matters into their own hands.

It’s called “accelerationism,” and it rests on the idea that Western governments are irreparably corrupt. As a result, the best thing white supremacists can do is accelerate their demise by sowing chaos and creating political tension. Accelerationist ideas have been cited in mass shooters’ manifestos — explicitly, in the case of the New Zealand killer — and are frequently referenced in white supremacist web forums and chat rooms.

Accelerationists reject any effort to seize political power through the ballot box, dismissing the alt-right’s attempts to engage in mass politics as pointless. If one votes, one should vote for the most extreme candidate, left or right, to intensify points of political and social conflict within Western societies. Their preferred tactic for heightening these contradictions, however, is not voting, but violence — attacking racial minorities and Jews as a way of bringing us closer to a race war, and using firearms to spark divisive fights over gun control. The ultimate goal is to collapse the government itself; they hope for a white-dominated future after that…

“Accelerationism: the obscure idea inspiring white supremacist killers around the world” (and source of the image above)

See also: “A Year After January 6, Is Accelerationism the New Terrorist Threat?”

For a look at the “intellectual” roots of accelerationism, see “Accelerationism: how a fringe philosophy predicted the future we live in.”

For a powerful articulation of the dangers of Futurism (and even more, Accelerationism), see “The Perils of Smashing the Past.”

And for a reminder of the not-so-obvious ways that movements like these live on, see “The Intentionally Scandalous 1932 Cookbook That Stands the Test of Time,” on The Futurist Cookbook, by Futurist Manifesto author Filippo Tommaso Marinetti… which foreshadowed the “food as fuel” culinary movements that we see today.

* Jean Baudrillard

###

As we slow down, we might send an “Alles Gute zum Geburtstag” to the polymathic Gottfried Wilhelm Leibniz, the philosopher, mathematician, and political adviser, who was important both as a metaphysician and as a logician, but who is probably best remembered for his independent invention of the calculus; he was born on this date in 1646.  Leibniz discovered and developed differential and integral calculus on his own, which he published in 1684; but he became involved in a bitter priority dispute with Isaac Newton, whose ideas on the calculus were developed earlier (1665), but published later (1687).
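One reason Leibniz’s version of the calculus is the one we remember is his notation, which we still use, while Newton’s fluxion dots (\(\dot{y}\)) survive mainly in physics. The derivative written as a quotient of differentials, the integral sign (a stylized S, for summa), and the product rule are all Leibniz’s; the limit definition below is the modern dress, not Leibniz’s own infinitesimals:

\[
\frac{dy}{dx} \;=\; \lim_{\Delta x \to 0} \frac{y(x+\Delta x) - y(x)}{\Delta x},
\qquad \int y\, dx,
\qquad d(uv) \;=\; u\,dv + v\,du .
\]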

As it happens, Leibniz was a wry and incisive political and cultural observer.  Consider, e.g…

If geometry conflicted with our passions and our present concerns as much as morality does, we would dispute it and transgress it almost as much–in spite of all Euclid’s and Archimedes’ demonstrations, which would be treated as fantasies and deemed to be full of fallacies. [Leibniz, New Essays, p. 95]


“Philosophy is a battle against the bewitchment of our intelligence by means of language”*…

Clockwise from top: Iris Murdoch, Philippa Foot, Mary Midgley, Elizabeth Anscombe

How four women defended ethical thought from the legacy of positivism…

By Michaelmas Term 1939, mere weeks after the United Kingdom had declared war on Nazi Germany, Oxford University had begun a change that would wholly transform it by the academic year’s end. Men ages twenty and twenty-one, save conscientious objectors and those deemed physically unfit, were being called up, and many others just a bit older volunteered to serve. Women had been able to matriculate and take degrees at the university since 1920, but members of the then all-male Congregation had voted to restrict the number of women to fewer than a quarter of the overall student population. Things changed rapidly after the onset of war. The proportion of women shot up, and, in many classes, there were now as many women as men.

Among the women who experienced these novel conditions were several who did courses in philosophy and went on to strikingly successful intellectual careers. Elizabeth Anscombe, noted philosopher and Catholic moral thinker who would go on to occupy the chair in philosophy that Ludwig Wittgenstein had held at Cambridge, started a course in Greats—roughly, classics and philosophy—in 1937, as did Jean Austin (née Coutts), who would marry philosopher J. L. Austin and later have a long teaching career at Oxford. Iris Murdoch, admired and beloved philosopher and novelist, began to read Greats in 1938 at the same time as Mary Midgley (née Scrutton), who became a prominent public philosopher and animal ethicist. A year later Philippa Foot (née Bosanquet), distinguished moral philosopher, started to read the then relatively new course PPE—philosophy, politics and economics—and three years after that Mary Warnock (née Wilson), subsequently a high-profile educator and public intellectual, went up to read Greats.

Several of these women would go on to make groundbreaking contributions to ethics…

Oxford philosophy in the early to mid 1930s had been in upheaval. The strains of Hegel-inspired idealism that had remained influential in Britain through the first decade of the twentieth century had been definitively displaced, in the years before World War I, by realist doctrines which claimed that knowledge must be of what is independent of the knower, and which were elaborated within ethics into forms of intuitionism. By the ’30s, these schools of thought were themselves threatened by new waves of enthusiasm for the themes of logical positivism developed by a group of philosophers and scientists, led by Moritz Schlick, familiarly known as the Vienna Circle. Cambridge University’s Susan Stebbing, the first woman to be appointed to a full professorship in philosophy in the UK, had already interacted professionally with Schlick in England and had championed tenets of logical positivism in essays and public lectures when, in 1933, Oxford don Gilbert Ryle recommended that his promising tutee Freddie Ayer make a trip to Vienna. Ayer obliged, and upon his return he wrote a brief manifesto, Language, Truth and Logic (1936), in defense of some of the Vienna Circle’s views. The book became a sensation, attracting attention and debate far beyond the halls of academic philosophy. Its bombshell contention was that only two kinds of statements are meaningful: those that are true solely in virtue of the meanings of their constituent terms (such as “all bachelors are unmarried”), and those that can be verified through physical observation. The gesture seemed to consign to nonsense, at one fell swoop, the statements of metaphysics, theology, and ethics.

This turn to “verification” struck some as a fitting response to strains of European metaphysics that many people, rightly or wrongly, associated with fascist irrationalism and the gathering threat of war. But not everyone at Oxford was sympathetic. Although Ayer’s ideas weren’t universally admired, they were widely discussed, including by a group of philosophers led by Isaiah Berlin, who met regularly at All Souls College—among them, J. L. Austin, Stuart Hampshire, Donald MacKinnon, Donald MacNabb, Anthony Woozley, and Ayer himself. Oxford philosophy’s encounter with logical positivism would have a lasting impact and would substantially set the terms for subsequent research in many areas of philosophy—including, it would turn out, ethics and political theory…

A fascinating intellectual history of British moral philosophy in the second half of the 20th century: “Metaphysics and Morals,” Alice Crary in @BostonReview.

* Ludwig Wittgenstein

###

As we ponder precepts, we might recall that it was on this date in 1248 that the seat of the action described above, The University of Oxford, received its Royal Charter from King Henry III.   While it has no known date of foundation, there is evidence of teaching as far back as 1096, making it the oldest university in the English-speaking world and the world’s second-oldest university in continuous operation (after the University of Bologna).

The university operates the world’s oldest university museum, as well as the largest university press in the world, and the largest academic library system in Britain.  Oxford has educated and/or employed many notables, including 72 Nobel laureates, 4 Fields Medalists, 6 Turing Award winners, 27 prime ministers of the United Kingdom, and many heads of state and government around the world.


“Nothing in life is certain except death, taxes and the second law of thermodynamics”*…

The second law of thermodynamics– asserting that the entropy of a system increases with time– is among the most sacred principles in all of science, but it has always rested on 19th-century arguments about probability. As Philip Ball reports, new thinking traces its true source to the flows of quantum information…

In all of physical law, there’s arguably no principle more sacrosanct than the second law of thermodynamics — the notion that entropy, a measure of disorder, will always stay the same or increase. “If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations,” wrote the British astrophysicist Arthur Eddington in his 1928 book The Nature of the Physical World. “If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” No violation of this law has ever been observed, nor is any expected.

But something about the second law troubles physicists. Some are not convinced that we understand it properly or that its foundations are firm. Although it’s called a law, it’s usually regarded as merely probabilistic: It stipulates that the outcome of any process will be the most probable one (which effectively means the outcome is inevitable given the numbers involved).

Yet physicists don’t just want descriptions of what will probably happen. “We like laws of physics to be exact,” said the physicist Chiara Marletto of the University of Oxford. Can the second law be tightened up into more than just a statement of likelihoods?

A number of independent groups appear to have done just that. They may have woven the second law out of the fundamental principles of quantum mechanics — which, some suspect, have directionality and irreversibility built into them at the deepest level. According to this view, the second law comes about not because of classical probabilities but because of quantum effects such as entanglement. It arises from the ways in which quantum systems share information, and from cornerstone quantum principles that decree what is allowed to happen and what is not. In this telling, an increase in entropy is not just the most likely outcome of change. It is a logical consequence of the most fundamental resource that we know of — the quantum resource of information…
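To make the “merely probabilistic” character concrete, here is a minimal sketch (an illustration of the classical statistical picture, not anything from Ball’s piece): the Ehrenfest urn model, in which N particles hop at random between two boxes, and the Boltzmann entropy S = ln W is tracked, with Boltzmann’s constant set to 1 and W the number of microstates consistent with the count in the left box.

```python
import math
import random

# Ehrenfest urn model: N particles, two boxes; each step, one uniformly
# chosen particle hops to the other box. Boltzmann entropy S = k ln W,
# with W = C(N, n_left) microstates; we set k = 1 and track ln W.

N = 100          # total particles
n_left = N       # maximally ordered start: all particles in the left box

def entropy(n: int) -> float:
    """ln of the number of microstates with n particles in the left box."""
    return math.log(math.comb(N, n))

for step in range(1001):
    if step % 200 == 0:
        print(f"step {step:4d}   n_left = {n_left:3d}   S = {entropy(n_left):6.2f}")
    # a uniformly chosen particle sits in the left box with probability n_left/N
    if random.random() < n_left / N:
        n_left -= 1   # it hops to the right box
    else:
        n_left += 1   # it hops to the left box
```

Run it and S climbs from 0 toward its maximum (about 66.8 for N = 100) and then jitters there; downward fluctuations do occur, they are just overwhelmingly unlikely to be large. That loophole is precisely why physicists like Marletto want something stronger than a statement of likelihoods.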

Is that most sacrosanct of natural laws, the second law of thermodynamics, a quantum phenomenon? “Physicists Rewrite the Fundamental Law That Leads to Disorder,” from @philipcball in @QuantaMagazine.

* “Nothing in life is certain except death, taxes and the second law of thermodynamics. All three are processes in which useful or accessible forms of some quantity, such as energy or money, are transformed into useless, inaccessible forms of the same quantity. That is not to say that these three processes don’t have fringe benefits: taxes pay for roads and schools; the second law of thermodynamics drives cars, computers and metabolism; and death, at the very least, opens up tenured faculty positions.” — Seth Lloyd

###

As we get down with disorder, we might spare a thought for François-Marie Arouet, better known as Voltaire; he died on this date in 1778.  The Father of the Age of Reason, he produced works in almost every literary form: plays, poems, novels, essays, and historical and scientific works– more than 2,000 books and pamphlets (and more than 20,000 letters).  He popularized Isaac Newton’s work in France by arranging a translation of Principia Mathematica to which he added his own commentary.

A social reformer, Voltaire used satire to criticize the intolerance, religious dogma, and oligopolistic privilege of his day, perhaps nowhere more sardonically than in Candide.


“If we are to prevent megatechnics from further controlling and deforming every aspect of human culture, we shall be able to do so only with the aid of a radically different model derived directly, not from machines, but from living organisms and organic complexes (ecosystems)”*…

In a riff on Lewis Mumford, the redoubtable L. M. Sacasas addresses the unraveling of modernity…

The myth of the machine underlies a set of three related and interlocking presumptions which characterized modernity: objectivity, impartiality, and neutrality. More specifically, the presumptions that we could have objectively secured knowledge, impartial political and legal institutions, and technologies that were essentially neutral tools but which were ordinarily beneficent. The last of these appears to stand somewhat apart from the first two in that it refers to material culture rather than to what might be taken as more abstract intellectual or moral stances. In truth, however, they are closely related. The more abstract intellectual and institutional pursuits were always sustained by a material infrastructure, and, more importantly, the machine supplied a master template for the organization of human affairs.

Just as the modern story began with the quest for objectively secured knowledge, this ideal may have been the first to lose its implicit plausibility. From the late 19th century onward, philosophers, physicists, sociologists, anthropologists, psychologists, and historians, among others, have proposed a more complex picture that emphasized the subjective, limited, contingent, situated, and even irrational dimensions of how humans come to know the world. The ideal of objectively secured knowledge became increasingly questionable throughout the 20th century. Some of these trends get folded under the label “postmodernism,” but I found the term unhelpful at best a decade ago, and now find it altogether useless.

We can similarly trace a growing disillusionment with the ostensible impartiality of modern institutions. This takes at least two forms. On the one hand, we might consider the frustrating and demoralizing character of modern bureaucracies, which we can describe as rule-based machines designed to outsource judgement and enhance efficiency. On the other, we can note the heightened awareness of the actual failures of modern institutions to live up to the ideals of impartiality, which has been, in part, a function of the digital information ecosystem.

But while faith in the possibility of objectively secured knowledge and impartial institutions faltered, the myth of the machine persisted in the presumption that technology itself was fundamentally neutral. Until very recently, that is. Or so it seems. And my thesis (always for disputation) is that the collapse of this last manifestation of the myth brings the whole house down. This is in part because of how much work the presumption of technological neutrality was doing all along to hold American society together. (International readers: as always read with a view to your own setting. I suspect there are some areas of broad overlap and other instances when my analysis won’t travel well). Already by the late 19th century, progress had become synonymous with technological advancements, as Leo Marx argued. If social, political, or moral progress stalled, then at least the advance of technology could be counted on…

But over the last several years, the plausibility of this last and also archetypal manifestation of the myth of the machine has also waned. Not altogether, to be sure, but in important and influential segments of society and throughout a wide cross-section of society, too. One can perhaps see the shift most clearly in the public discourse about social media and smart phones, but this may be a symptom of a larger disillusionment with technology. And not only technological artifacts and systems, but also with the technocratic ethos and the public role of expertise.

If the myth of the machine in these three manifestations was, in fact, a critical element of the culture of modernity, underpinning its aspirations, then when each in turn becomes increasingly implausible the modern world order comes apart. I’d say that this is more or less where we’re at. You could usefully analyze any number of cultural fault lines through this lens. The center, which may not in fact hold, is where you find those who still operate as if the presumptions of objectivity, impartiality, and neutrality still compelled broad cultural assent, and they are now assailed from both the left and the right by those who have grown suspicious or altogether scornful of such presumptions. Indeed, the left/right distinction may be less helpful than the distinction between those who uphold some combination of the values of objectivity, impartiality, and neutrality and those who no longer find them compelling or desirable.

What happens when the systems and strategies deployed to channel often violent clashes within a population deeply, possibly intractably divided about substantive moral goods and now even about what Arendt characterized as the publicly accessible facts upon which competing opinions could be grounded—what happens when these systems and strategies fail?

It is possible to argue that they failed long ago, but the failure was veiled by an unevenly distributed wave of material abundance. Citizens became consumers and, by and large, made peace with the exchange. After all, if the machinery of government could run of its own accord, what was there left to do but enjoy the fruits of prosperity? But what if abundance was an unsustainable solution, either because it taxed the earth at too high a rate or because it was purchased at the cost of other values such as rootedness, meaningful work and involvement in civic life, abiding friendships, personal autonomy, and participation in rich communities of mutual care and support? Perhaps in the framing of that question, I’ve tipped my hand about what might be the path forward.

At the heart of technological modernity there was the desire—sometimes veiled, often explicit—to overcome the human condition. The myth of the machine concealed an anti-human logic: if the problem is the failure of the human to conform to the pattern of the machine, then bend the human to the shape of the machine or eliminate the human altogether. The slogan of one of the high-modernist world’s fairs of the 1930s comes to mind: “Science Finds, Industry Applies, Man Conforms.” What is now being discovered in some quarters, however, is that the human is never quite eliminated, only diminished…

Eminently worth reading in full: “The Myth of the Machine,” from @LMSacasas.

For a deep dive into similar waters, see John Ralston Saul‘s (@JohnRalstonSaul) Voltaire’s Bastards.


* Lewis Mumford, The Myth of the Machine

###

As we rethink rudiments, we might recall that it was on this date in 1919 that Arthur Eddington confirmed Einstein’s light-bending prediction– a part of The Theory of General Relativity– using photos of a solar eclipse. Eddington’s paper the following year was the “debut” of Einstein’s theoretical work in most of the English-speaking world (and occasioned an urban legend: when a reporter supposedly suggested that “only three people understand relativity,” Eddington is said to have jokingly replied “Oh, who’s the third?”)
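The number at stake on those photographic plates was tiny but decisive. General relativity predicts that a light ray grazing the Sun’s limb is deflected by

\[
\alpha \;=\; \frac{4 G M_\odot}{c^{2} R_\odot} \;\approx\; 1.75'' ,
\]

roughly twice the 0.87″ one gets from a naive Newtonian corpuscular calculation; Eddington’s measurements of star positions near the eclipsed Sun favored the larger, Einsteinian value.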

One of Eddington’s photographs of the total solar eclipse of 29 May 1919, presented in his 1920 paper announcing the expedition’s success in confirming Einstein’s theory that light “bends”

“Oh how wrong we were to think immortality meant never dying”*…

Mourir C’est Renaitre (Death and Immortality), Copy after William Blake

The John Templeton Foundation has undertaken a deep investigation into the biology, philosophy, and theology of immortality research. Lorraine Boissoneault offers the first in a series of reports on their work…

Around 100,000 years ago, humans living in the region that would come to be called “Israel” did something remarkable. When members of the community died, those left behind buried the dead in a cave, placing some of the bodies with great care and arranging them near colorful pigments and shells. Although burial is so common today as to be almost unremarkable, for ancient humans to exhibit such behavior suggested a major development in cultural practices. The Qafzeh Cave holds some of the oldest evidence that humans understand death differently than many other creatures. We seem to have an innate desire to mark it with ritual.

It is an unavoidable fact of biology that all organisms die, whether by disease, disaster, or simply old age. Yet our species, Homo sapiens, seems to be the only creature blessed—or cursed—with the cognitive ability to understand our mortality. And thanks to our powerful intelligence, we’re also the only beings to imagine and seek out death’s opposite: immortality. 

In religious traditions, spiritual afterlives and reincarnation offer continuation of the self beyond death. In myth and legend, sources of everlasting life abound, from the Fountain of Youth to elixirs of life. Some people seek symbolic immortality through procreation. Others aim for contributions to society, whether artistic, academic or scientific. And still others have pushed the bounds of technology in search of dramatic life extension or a digital self. 

Where does this impulse come from?…

Find out: “Pre-life, Afterlife, and the Drive for Immortality,” from @boissolm @templeton_fdn.

* Gerard Way

###

As we internalize eternity, we might recall that it was on this date in 1826 that the HMS Beagle set sail from Plymouth on its first voyage, an expedition to conduct a hydrographic survey of Patagonia and Tierra del Fuego in support of the larger ship HMS Adventure.

The Beagle‘s second voyage (1831-1836) is rather better remembered, as it was on that expedition that the ship’s naturalist, a young Charles Darwin (whose published journal of the journey earned him early fame as a writer), made the observations that led him to even greater fame for his theory of evolution.

HMS Beagle in the Straits of Magellan (source)
