(Roughly) Daily

Posts Tagged ‘moral philosophy’

“Not to extinguish our free will, I hold it to be true that Fortune is the arbiter of one-half of our actions, but that she still leaves us to direct the other half”*…

Detail from The Threads of Destiny (Los Hilos del Destino), 1957, by Remedios Varo (1908–1963);

Further to an earlier post about the latest installment of an age-old debate– the “dialogue” on free will vs. determinism between Robert Sapolsky (determinist) and Kevin Mitchell (champion of free will)– the (remarkable) George Scialabba weighs in…

In 1884, William James began his celebrated essay “The Dilemma of Determinism” by begging his readers’ indulgence: “A common opinion prevails that the juice has ages ago been pressed out of the free-will controversy, and that no new champion can do more than warm up stale arguments which everyone has heard.” James persisted and rendered the subject very juicy, as he always did. But if the topic appeared exhausted to most people then, surely a hundred and forty years later there can’t be anything new to say. Whole new fields of physics, biology, mathematics, and medicine have been invented—surely this ancient philosophical question doesn’t still interest anyone?

Indeed, it does; it retains for many what James called “the most momentous importance.” Like other hardy perennials—the objectivity of “good”; the universality of truth; the existence of human nature and its telos—it continues to fascinate philosophers and laypersons, who agree only that the stakes are enormous: “our very humanity,” many of them insist.

Why so momentous? Skepticism about free will is said to produce two disastrous but opposed states of mind. The first is apathy: We are bound to be so demoralized by the conviction that nothing is up to us, that we are not the captains of our fate, that we need no longer get out of bed. The other is frenzy: We will be so exhilarated by our liberation from responsibility and guilt that we will run amok, like Dostoevsky’s imagined atheist, who concludes that if God does not exist, everything is permitted.

Note that it is not the absence of free will but only the absence of belief in free will that is said to have these baneful effects. People who never give the subject a thought are neither apathetic nor frenetic, at least not for these reasons. Should we just stop thinking about the whole question?

For twenty-five hundred years, no generation has succeeded in doing that: So we may as well wade in. What is free will? It is the capacity to make uncaused choices. This does not mean that nothing causes my choice—it means that I do. But surely something has caused me to be the person who makes that choice. And doesn’t whatever causes me to be the person I am also cause the choices I make?…

[Scialabba succinctly explicates Sapolsky’s and Mitchell’s (each, estimable) arguments…]

… But are beliefs about free will really the point here? Judges, whether or not they believe in free will, should take more cognizance of mitigating circumstances than they do now. A baby damaged by prenatal cocaine exposure who grows up to be an addict and petty thief deserves mercy; a billionaire whose tax evasion robs his fellow citizens of tens of millions of dollars deserves none. But no philosophical convictions are needed to arrive at these conclusions, only humanity and good sense.

And whether or not we have free will, isn’t punishment also justified as deterrence? Surely, the prospect of a long stretch in prison (or quarantine) would give pause to at least some murderers, rapists, and persons scheming to overturn a fair presidential election? And beyond that, punishment serves as a public affirmation of the values of a family or society. We are embodied beings: Values cannot only be preached; they must sometimes be enforced.

At a certain point, one may ask, what is really at stake in this debate? Sapolsky appears to harbor no metaphysical designs on readers; he spins his intricate, ingenious causal webs only, in the end, to enlarge our sympathy for life’s failures. Mitchell does seem to have a humanity-affirming philosophical agenda. “You are the type of thing that can take action, that can make decisions, that can be a causal force in the world: You are an agent,” he often reminds the reader, implying that these are things a scientific materialist must, in strict logic, deny. But I strongly doubt that any scientific materialist anywhere in the multiverse would deny that she can take action, make decisions, or be a causal force, or that she is an agent, or does things for reasons. She might, though, think that all her choices are caused, which, Sapolsky would say, is perfectly compatible with taking actions, making decisions, being a causal force, or acting for reasons. Elsewhere, Mitchell warns readers not to believe anyone (presumably the insidious scientific materialist) who suggests that we are merely “a collection of atoms pushed around by the laws of physics.” To which our scientific materialist might reply that we are indeed very highly organized collections of atoms, molecules, nerves, muscles, and hundreds of other components, pushed and pulled by the laws of physics, chemistry, biology, neuroscience, psychology, sociology, economics, and politics, along with intimations from philosophy, history, and art, and constantly adjusting to and modifying those influences from a center that is provisionally but not permanently stable. This, she would say, is how one can be an agent without free will.

With what I hope is due deference, I humbly disagree with both Sapolsky and Mitchell, and even with my deeply revered William James. Perhaps the question of free will is not so momentous. Philosophers have been debating about it for thousands of years, Mitchell observes. “That these debates continue today with unabated fervor tells you that they have not yet resolved the issue.” Indeed, they haven’t. Perhaps they should take a break. Perhaps it is a controversy without consequences. Perhaps whether we are free or fated, morality and politics, science and medicine, art and literature will all go their merry or melancholy ways, unaffected.

Notwithstanding Sapolsky’s hopes and Mitchell’s fears, whatever we decide about free will, the world—even the moral world—will look the same afterward as before. This, along with our millennia-long failure to make appreciable, or any, progress toward an answer, suggests that we are in the presence of a pseudoproblem. James himself, in “The Will to Believe,” written a dozen years after he defended free will in “The Dilemma of Determinism,” conceded that “free will and simple wishing do seem, in the matter of our credences, to be only fifth wheels to the coach.” The moral and political worlds run—to the extent they run at all—on generosity and imagination, mother wit and sympathetic understanding. These can answer all our questions about moral responsibility and moral obligation without our having to solve the insoluble conundrums of free will.

A new round in an old debate: “Free at Last?,” from @hedgehogreview.

* Niccolò Machiavelli, The Prince

###

As we wrestle with responsibility, we might spare a thought for Henri-Louis Bergson; he died on this date in 1941.  A philosopher especially influential in the first half of the 20th century, Bergson convinced many of the primacy of immediate experience and intuition over rationalism and science for the understanding of reality… many, but not Wittgenstein, Russell, Moore, or Santayana, who thought that he willfully misunderstood the scientific method in order to justify his “projection of subjectivity onto the physical world.”  Still, in 1927 Bergson won the Nobel Prize in Literature; and in 1930, he received France’s highest honor, the Grand-Croix de la Légion d’honneur.

Bergson’s influence waned mightily later in the century.  To the extent that there’s been a bit of a resurgence of interest, it’s largely the result, in philosophical circles, of Gilles Deleuze’s appropriation of Bergson’s concept of “multiplicity” and his treatment of duration, which Deleuze used in his critique of Hegel’s dialectic; and, in the religious and spiritualist studies communities, of Bergson’s seeming embrace of the concept of an overriding/underlying consciousness in which humans participate.

Indeed, Time and Free Will: An Essay on the Immediate Data of Consciousness, Bergson’s doctoral thesis, first published in 1889, dealt explicitly with the question we’re considering. Bergson argued that it is merely a common confusion among philosophers, caused by an illegitimate translation of the unextended into the extended– the argument that introduced his theory of duration.

 source

“What’s so great about The Trolley Problem is that there is no right answer”*…

Longtime readers will know of your correspondent’s fascination with the Trolley Problem (a thought experiment in ethics and psychology introduced in 1967 by Philippa Foot)– see here and especially here.

Should you pull the lever and sacrifice one person to save five?

Today, another variation…

From @ShitpostGate via @RogersBacon1.

* Chidi, The Good Place

###

As we ponder the imponderable, we might send overconfident birthday greetings to Hastings Rashdall; he was born on this date in 1858. A philosopher, theologian, historian, and Anglican priest, he was a proponent of ideal utilitarianism— and so would likely have had a quick and confident answer to the problem.

source

Written by (Roughly) Daily

June 24, 2023 at 1:00 am

“The only advantage of not being too good a housekeeper is that your guests are so pleased to feel how very much better they are”*…

Roomba is on the rise, but is the humble carpet sweeper poised for a rebound? Edward Tenner considers…

Every so often technology critics charge that despite the exponential growth of computer power, the postwar dreams of automated living have been stalled. It is true that jetpacks are unlikely to go mainstream, and that fully autonomous vehicles are more distant than they appear, at least on local roads. And the new materials that promised what the historian of technology Jeffrey L. Meikle has called “damp-cloth utopianism”—the vision of a future household where plastic-covered furnishings would allow carefree cleaning—have created dystopia in the world’s oceans.

Yet a more innocent dream, the household robot, has come far closer to reality: not, it is true, the anthropomorphic mechanical butler of science-fiction films, but a humbler machine that is still impressive, the autonomous robotic vacuum cleaner. Consider, for example, the Roomba®. Twenty years after introducing the first model, the manufacturer, iRobot, sold itself to Amazon in August 2022 for approximately $1.7 billion in cash. Since 2013, a unit has been part of the permanent collection of the Smithsonian’s National Museum of American History.

As the museum site notes, the first models found their way by bumping into furniture, walls, and other obstacles. They could not be programmed to stay out of areas of the home; an infrared-emitting accessory was needed to create a “virtual wall.” Like smartphones, introduced a few years later, Roombas have acquired new features steadily, with a new generation on average every year. (They have also inspired a range of products from rival manufacturers.) Over 35 million units have been sold. According to Fortune Business Insights Inc., the worldwide market was nearly $10 billion in 2020 and is estimated to increase from almost $12 billion in 2021 to $50.65 billion in 2028.

Adam Smith might applaud the Roomba as a triumph of the liberal world order he had endorsed. Thanks to the global marketplace for design ideas, chips, and mechanical parts, he might remark, a division of labor—Roomba is designed mainly in the United States by an international team and manufactured in China and Malaysia—has benefited consumers worldwide. Smith would nonetheless disapprove of the economic nationalism of both the United States and China that has made managing high-technology manufacturing chains so challenging.

Yet Smith might also make a different kind of observation, highlighting the technology’s limits rather than its capabilities… Could household automation be not only irrelevant to fundamental human welfare, but harmful? As an omnivorous reader, Smith would no doubt discover in our medical literature the well-established dangers of sedentary living (he loved “long solitary walks by the Sea side”) and the virtues of getting up regularly to perform minor chores, such as turning lights on and off, adjusting the thermostat, and vacuuming the room, the same sorts of fidgeting that the Roomba and the entire Internet of Things are hailed as replacing. In fact the very speed of improvement of robotic vacuums may be a hazard in itself, as obsolescent models add to the accumulation of used batteries and environmentally hazardous electronic waste.

As the sustainability movement grows, there are signs of a revival of the humble carpet sweeper, invented in 1876, as sold by legacy brands like Fuller Brush and Bissell. They offer recycled plastic parts, independence from the electric grid, and freedom from worry about hackers downloading users’ home layouts from the robots’ increasingly sophisticated cloud storage…

Via the estimable Alan Jacobs and his wonderful Snakes and Ladders: “Adam Smith and the Roomba®” from @edward_tenner.

(Image above: source)

* Eleanor Roosevelt

###

As we get next to godliness, we might spare a thought for Waldo Semon; he died on this date in 1999. An inventor with over 100 patents, he is best known as the creator of “plasticized PVC” (or vinyl). The world’s third-most-used plastic, vinyl is employed in imitation leather, garden hoses, shower curtains, and coatings– but most frequently of all, in flooring tiles.

For his accomplishments, Semon was inducted into the Invention Hall of Fame in 1995 at the age of 97.

source

Written by (Roughly) Daily

May 26, 2023 at 1:00 am

“Philosophy is a battle against the bewitchment of our intelligence by means of language”*…

Clockwise from top: Iris Murdoch, Philippa Foot, Mary Midgley, Elizabeth Anscombe

How four women defended ethical thought from the legacy of positivism…

By Michaelmas Term 1939, mere weeks after the United Kingdom had declared war on Nazi Germany, Oxford University had begun a change that would wholly transform it by the academic year’s end. Men ages twenty and twenty-one, save conscientious objectors and those deemed physically unfit, were being called up, and many others just a bit older volunteered to serve. Women had been able to matriculate and take degrees at the university since 1920, but members of the then all-male Congregation had voted to restrict the number of women to fewer than a quarter of the overall student population. Things changed rapidly after the onset of war. The proportion of women shot up, and, in many classes, there were newly as many women as men.

Among the women who experienced these novel conditions were several who did courses in philosophy and went on to strikingly successful intellectual careers. Elizabeth Anscombe, noted philosopher and Catholic moral thinker who would go on to occupy the chair in philosophy that Ludwig Wittgenstein had held at Cambridge, started a course in Greats—roughly, classics and philosophy—in 1937, as did Jean Austin (née Coutts), who would marry philosopher J. L. Austin and later have a long teaching career at Oxford. Iris Murdoch, admired and beloved philosopher and novelist, began to read Greats in 1938 at the same time as Mary Midgley (née Scrutton), who became a prominent public philosopher and animal ethicist. A year later Philippa Foot (née Bosanquet), distinguished moral philosopher, started to read the then relatively new course PPE—philosophy, politics and economics—and three years after that Mary Warnock (née Wilson), subsequently a high-profile educator and public intellectual, went up to read Greats.

Several of these women would go on to make groundbreaking contributions to ethics…

Oxford philosophy in the early to mid 1930s had been in upheaval. The strains of Hegel-inspired idealism that had remained influential in Britain through the first decade of the twentieth century had been definitively displaced, in the years before World War I, by realist doctrines which claimed that knowledge must be of what is independent of the knower, and which were elaborated within ethics into forms of intuitionism. By the ’30s, these schools of thought were themselves threatened by new waves of enthusiasm for the themes of logical positivism developed by a group of philosophers and scientists, led by Moritz Schlick, familiarly known as the Vienna Circle. Cambridge University’s Susan Stebbing, the first woman to be appointed to a full professorship in philosophy in the UK, had already interacted professionally with Schlick in England and had championed tenets of logical positivism in essays and public lectures when, in 1933, Oxford don Gilbert Ryle recommended that his promising tutee Freddie Ayer make a trip to Vienna. Ayer obliged, and upon his return he wrote a brief manifesto, Language, Truth and Logic (1936), in defense of some of the Vienna Circle’s views. The book became a sensation, attracting attention and debate far beyond the halls of academic philosophy. Its bombshell contention was that only two kinds of statements are meaningful: those that are true solely in virtue of the meanings of their constituent terms (such as “all bachelors are unmarried”), and those that can be verified through physical observation. The gesture seemed to consign to nonsense, at one fell swoop, the statements of metaphysics, theology, and ethics.

This turn to “verification” struck some as a fitting response to strains of European metaphysics that many people, rightly or wrongly, associated with fascist irrationalism and the gathering threat of war. But not everyone at Oxford was sympathetic. Although Ayer’s ideas weren’t universally admired, they were widely discussed, including by a group of philosophers led by Isaiah Berlin, who met regularly at All Souls College—among them, J. L. Austin, Stuart Hampshire, Donald MacKinnon, Donald MacNabb, Anthony Woozley, and Ayer himself. Oxford philosophy’s encounter with logical positivism would have a lasting impact and would substantially set the terms for subsequent research in many areas of philosophy—including, it would turn out, ethics and political theory…

A fascinating intellectual history of British moral philosophy in the second half of the 20th century: “Metaphysics and Morals,” Alice Crary in @BostonReview.

* Ludwig Wittgenstein

###

As we ponder precepts, we might recall that it was on this date in 1248 that the seat of the action described above, The University of Oxford, received its Royal Charter from King Henry III.   While it has no known date of foundation, there is evidence of teaching as far back as 1096, making it the oldest university in the English-speaking world and the world’s second-oldest university in continuous operation (after the University of Bologna).

The university operates the world’s oldest university museum, as well as the largest university press in the world and the largest academic library system in Britain.  Oxford has educated and/or employed many notables, including 72 Nobel laureates, 4 Fields Medalists, 6 Turing Award winners, 27 prime ministers of the United Kingdom, and many heads of state and government around the world.


 source

“Reality is merely an illusion, albeit a very persistent one”*…

Objective reality has properties outside the range of our senses (and, for that matter, our instruments), and studies suggest that our brains warp sensory data as soon as we collect it. So we’d do well to remember that we don’t have– and likely won’t ever have– perfect information…

Many philosophers believe objective reality exists, if “objective” means “existing as it is independently of any perception of it.” However, ideas on what that reality actually is and how much of it we can interact with vary dramatically.

Aristotle argued, in contrast to his teacher Plato, that the world we interact with is as real as it gets and that we can have knowledge of it, but he thought that the knowledge we could have about it was not quite perfect. Bishop Berkeley thought everything existed as ideas in minds — he argued against the notion of physical matter — but that there was an objective reality since everything also existed in the mind of God. Immanuel Kant, a particularly influential Enlightenment philosopher, argued that while “the thing in itself” — an object as it exists independently of being subjectively observed — is real and exists, you cannot know anything about it directly.

Today, a number of metaphysical realists maintain that external reality exists, but they also suggest that our understanding of it is an approximation that we can improve upon. There are also direct realists who argue that we can interact with the world as it is, directly. They hold that many of the things we see when we interact with objects can be objectively known, though some things, like color, are subjective traits.

While it might be granted that our knowledge of the world is not perfect and is at least sometimes subjective, that doesn’t have to mean that the physical world doesn’t exist. The trouble is how we can go about knowing anything that isn’t subjective about it if we admit that our sensory information is not perfect.

As it turns out, that is a pretty big question.

Science both points toward a reality that exists independently of how any subjective observer interacts with it and shows us how much our viewpoints can get in the way of understanding the world as it is. The question of how objective science is in the first place is also a problem — what if all we are getting is a very refined list of how things work within our subjective view of the world?

Physical experiments like the Wigner’s Friend test show that our understanding of objective reality breaks down whenever quantum mechanics gets involved, even when it is possible to run a test. On the other hand, a lot of science seems to imply that there is an objective reality about which the scientific method is pretty good at capturing information.

Evolutionary biologist and author Richard Dawkins argues:

“Science’s belief in objective truth works. Engineering technology based upon the science of objective truth, achieves results. It manages to build planes that get off the ground. It manages to send people to the moon and explore Mars with robotic vehicles on comets. Science works, science produces antibiotics, vaccines that work. So anybody who chooses to say, ‘Oh, there’s no such thing as objective truth. It’s all subjective, it’s all socially constructed.’ Tell that to a doctor, tell that to a space scientist, manifestly science works, and the view that there is no such thing as objective truth doesn’t.”

While this leans a bit into being an argument from the consequences, he has a point: Large complex systems which suppose the existence of an objective reality work very well. Any attempt to throw out the idea of objective reality still has to explain why these things work.

A middle route might be to view science as the systematic collection of subjective information in a way that allows for intersubjective agreement between people. Under this understanding, even if we cannot see the world as it is, we could get universal or near-universal intersubjective agreement about something like how fast light travels in a vacuum. This might be as good as it gets, or it could be a way to narrow down what we can know objectively. Or maybe it is something else entirely.

While objective reality likely exists, our senses might not be able to access it well at all. We are limited beings with limited viewpoints and brains that begin to process sensory data the moment we acquire it. We must always be aware of our perspective, how that impacts what data we have access to, and that other perspectives may have a grain of truth to them…

Objective reality exists, but what can you know about it that isn’t subjective? Maybe not much: “You don’t see objective reality objectively: neuroscience catches up to philosophy.”

* Albert Einstein

###

As we ponder perspective, we might send thoughtful birthday greetings to Confucius; he was born on this date in 551 BCE. A Chinese philosopher and politician of the Spring and Autumn period, he has traditionally been considered the paragon of Chinese sages and is widely regarded as one of the most important and influential individuals in human history, as his teachings and philosophy formed the basis of East Asian culture and society, and they remain influential across China and East Asia today.

His philosophical teachings, called Confucianism, emphasized personal and governmental morality, correctness of social relationships, justice, kindness, and sincerity. Confucianism was part of the Chinese social fabric and way of life; to Confucians, everyday life was the arena of religion. It was he who espoused the well-known principle “Do not do unto others what you do not want done to yourself,” the Golden Rule.

source

Written by (Roughly) Daily

September 28, 2021 at 1:00 am