(Roughly) Daily

Posts Tagged ‘moral philosophy’

“The only advantage of not being too good a housekeeper is that your guests are so pleased to feel how very much better they are”*…

Roomba is on the rise, but is the humble carpet sweeper poised for a rebound? Edward Tenner considers…

Every so often technology critics charge that despite the exponential growth of computer power, the postwar dreams of automated living have been stalled. It is true that jetpacks are unlikely to go mainstream, and that fully autonomous vehicles are more distant than they appear, at least on local roads. And the new materials that promised what the historian of technology Jeffrey L. Meikle has called
“damp-cloth utopianism”—the vision of a future household where plastic-covered furnishings would allow carefree cleaning—have created dystopia in the world’s oceans.

Yet a more innocent dream, the household robot, has come far closer to reality: not, it is true, the anthropomorphic mechanical butler of science-fiction films, but a humbler machine that is still
impressive, the autonomous robotic vacuum cleaner. Consider, for example, the Roomba®. Twenty years after introducing the first model, the manufacturer, iRobot, sold itself to Amazon in August 2022 for
approximately $1.7 billion in cash. Since 2013, a unit has been part of the permanent collection of the Smithsonian’s National Museum of American History.

As the museum site notes, the first models found their way by bumping into furniture, walls, and other obstacles. They could not be programmed to stay out of areas of the home; an infrared-emitting
accessory was needed to create a “virtual wall.” Like smartphones, introduced a few years later, Roombas have acquired new features steadily with a new generation on average every year. (They have also inspired a range of products from rival manufacturers.) Over 35 million units have been sold. According to Fortune Business Insights Inc., the worldwide market was nearly $10 billion in 2020 and is estimated to increase from almost $12 billion in 2021 to $50.65 billion in 2028.

Adam Smith might applaud the Roomba as a triumph of the liberal world order he endorsed. Thanks to the global marketplace for design ideas, chips, and mechanical parts, he might remark,
a division of labor—Roomba is designed mainly in the United States by an international team and manufactured in China and Malaysia—has benefited consumers worldwide. Smith would nonetheless
disapprove of the economic nationalism of both the United States and China that has made managing high-technology manufacturing chains so challenging.

Yet Smith might also make a different kind of observation, highlighting the technology’s limits rather than its capabilities… Could household automation be not only irrelevant to fundamental human welfare, but harmful? As an omnivorous reader, Smith would no doubt discover in our medical literature the well-established dangers of sedentary living (he loved “long solitary walks by the Sea side”) and the virtues of getting up regularly to perform minor chores, such as turning lights on and off, adjusting the thermostat, and vacuuming the room, the same sorts of fidgeting that the Roomba and the entire Internet of Things are hailed as replacing. In fact, the very speed of improvement of robotic vacuums may be a hazard in itself, as obsolescent models add to the accumulation of used batteries and environmentally hazardous electronic waste.

As the sustainability movement grows, there are signs of a revival of the humble carpet sweeper, invented in 1876, as sold by legacy brands like Fuller Brush and Bissell. They offer recycled plastic parts, independence from the electric grid, and freedom from worry about hackers downloading users’ home layouts from the robots’ increasingly sophisticated cloud storage…

Via the estimable Alan Jacobs and his wonderful Snakes and Ladders: “Adam Smith and the Roomba®” from @edward_tenner.

(Image above: source)

* Eleanor Roosevelt

###

As we get next to godliness, we might spare a thought for Waldo Semon; he died on this date in 1999. An inventor with over 100 patents, he is best known as the creator of “plasticized PVC” (or vinyl). The world’s third most used plastic, vinyl is employed in imitation leather, garden hoses, shower curtains, and coatings– but most frequently of all, in flooring tiles.

For his accomplishments, Semon was inducted into the National Inventors Hall of Fame in 1995 at the age of 97.

source

Written by (Roughly) Daily

May 26, 2023 at 1:00 am

“Philosophy is a battle against the bewitchment of our intelligence by means of language”*…

Clockwise from top: Iris Murdoch, Philippa Foot, Mary Midgley, Elizabeth Anscombe

How four women defended ethical thought from the legacy of positivism…

By Michaelmas Term 1939, mere weeks after the United Kingdom had declared war on Nazi Germany, Oxford University had begun a change that would wholly transform it by the academic year’s end. Men ages twenty and twenty-one, save conscientious objectors and those deemed physically unfit, were being called up, and many others just a bit older volunteered to serve. Women had been able to matriculate and take degrees at the university since 1920, but members of the then all-male Congregation had voted to restrict the number of women to fewer than a quarter of the overall student population. Things changed rapidly after the onset of war. The proportion of women shot up, and, in many classes, there were newly as many women as men.

Among the women who experienced these novel conditions were several who did courses in philosophy and went on to strikingly successful intellectual careers. Elizabeth Anscombe, noted philosopher and Catholic moral thinker who would go on to occupy the chair in philosophy that Ludwig Wittgenstein had held at Cambridge, started a course in Greats—roughly, classics and philosophy—in 1937, as did Jean Austin (née Coutts), who would marry philosopher J. L. Austin and later have a long teaching career at Oxford. Iris Murdoch, admired and beloved philosopher and novelist, began to read Greats in 1938 at the same time as Mary Midgley (née Scrutton), who became a prominent public philosopher and animal ethicist. A year later Philippa Foot (née Bosanquet), distinguished moral philosopher, started to read the then relatively new course PPE—philosophy, politics and economics—and three years after that Mary Warnock (née Wilson), subsequently a high-profile educator and public intellectual, went up to read Greats.

Several of these women would go on to make groundbreaking contributions to ethics…

Oxford philosophy in the early to mid 1930s had been in upheaval. The strains of Hegel-inspired idealism that had remained influential in Britain through the first decade of the twentieth century had been definitively displaced, in the years before World War I, by realist doctrines which claimed that knowledge must be of what is independent of the knower, and which were elaborated within ethics into forms of intuitionism. By the ’30s, these schools of thought were themselves threatened by new waves of enthusiasm for the themes of logical positivism developed by a group of philosophers and scientists, led by Moritz Schlick, familiarly known as the Vienna Circle. Cambridge University’s Susan Stebbing, the first woman to be appointed to a full professorship in philosophy in the UK, had already interacted professionally with Schlick in England and had championed tenets of logical positivism in essays and public lectures when, in 1933, Oxford don Gilbert Ryle recommended that his promising tutee Freddie Ayer make a trip to Vienna. Ayer obliged, and upon his return he wrote a brief manifesto, Language, Truth and Logic (1936), in defense of some of the Vienna Circle’s views. The book became a sensation, attracting attention and debate far beyond the halls of academic philosophy. Its bombshell contention was that only two kinds of statements are meaningful: those that are true solely in virtue of the meanings of their constituent terms (such as “all bachelors are unmarried”), and those that can be verified through physical observation. The gesture seemed to consign to nonsense, at one fell swoop, the statements of metaphysics, theology, and ethics.

This turn to “verification” struck some as a fitting response to strains of European metaphysics that many people, rightly or wrongly, associated with fascist irrationalism and the gathering threat of war. But not everyone at Oxford was sympathetic. Although Ayer’s ideas weren’t universally admired, they were widely discussed, including by a group of philosophers led by Isaiah Berlin, who met regularly at All Souls College—among them, J. L. Austin, Stuart Hampshire, Donald MacKinnon, Donald MacNabb, Anthony Woozley, and Ayer himself. Oxford philosophy’s encounter with logical positivism would have a lasting impact and would substantially set the terms for subsequent research in many areas of philosophy—including, it would turn out, ethics and political theory…

A fascinating intellectual history of British moral philosophy in the second half of the 20th century: “Metaphysics and Morals,” Alice Crary in @BostonReview.

* Ludwig Wittgenstein

###

As we ponder precepts, we might recall that it was on this date in 1248 that the seat of the action described above, The University of Oxford, received its Royal Charter from King Henry III.   While it has no known date of foundation, there is evidence of teaching as far back as 1096, making it the oldest university in the English-speaking world and the world’s second-oldest university in continuous operation (after the University of Bologna).

The university operates the world’s oldest university museum, as well as the largest university press in the world, and the largest academic library system in Britain. Oxford has educated and/or employed many notables, including 72 Nobel laureates, 4 Fields Medalists, 6 Turing Award winners, 27 prime ministers of the United Kingdom, and many heads of state and government around the world.

 source

“Reality is merely an illusion, albeit a very persistent one”*…

Objective reality has properties outside the range of our senses (and for that matter, our instruments); and studies suggest that our brains warp sensory data as soon as we collect it. So we’d do well to remember that we don’t have– and likely won’t ever have– perfect information…

Many philosophers believe objective reality exists, if “objective” means “existing as it is independently of any perception of it.” However, ideas on what that reality actually is and how much of it we can interact with vary dramatically.

Aristotle argued, in contrast to his teacher Plato, that the world we interact with is as real as it gets and that we can have knowledge of it, but he thought that the knowledge we could have about it was not quite perfect. Bishop Berkeley thought everything existed as ideas in minds — he argued against the notion of physical matter — but that there was an objective reality since everything also existed in the mind of God. Immanuel Kant, a particularly influential Enlightenment philosopher, argued that while “the thing in itself” — an object as it exists independently of being subjectively observed — is real and exists, you cannot know anything about it directly.

Today, a number of metaphysical realists maintain that external reality exists, but they also suggest that our understanding of it is an approximation that we can improve upon. There are also direct realists who argue that we can interact with the world as it is, directly. They hold that many of the things we see when we interact with objects can be objectively known, though some things, like color, are subjective traits.

While it might be granted that our knowledge of the world is not perfect and is at least sometimes subjective, that doesn’t have to mean that the physical world doesn’t exist. The trouble is how we can go about knowing anything that isn’t subjective about it if we admit that our sensory information is not perfect.

As it turns out, that is a pretty big question.

Science both points toward a reality that exists independently of how any subjective observer interacts with it and shows us how much our viewpoints can get in the way of understanding the world as it is. The question of how objective science is in the first place is also a problem — what if all we are getting is a very refined list of how things work within our subjective view of the world?

Physical experiments like the Wigner’s Friend test show that our understanding of objective reality breaks down whenever quantum mechanics gets involved, even when it is possible to run a test. On the other hand, a lot of science seems to imply that there is an objective reality about which the scientific method is pretty good at capturing information.

Evolutionary biologist and author Richard Dawkins argues:

“Science’s belief in objective truth works. Engineering technology based upon the science of objective truth, achieves results. It manages to build planes that get off the ground. It manages to send people to the moon and explore Mars with robotic vehicles on comets. Science works, science produces antibiotics, vaccines that work. So anybody who chooses to say, ‘Oh, there’s no such thing as objective truth. It’s all subjective, it’s all socially constructed.’ Tell that to a doctor, tell that to a space scientist, manifestly science works, and the view that there is no such thing as objective truth doesn’t.”

While this leans a bit into being an argument from the consequences, he has a point: Large complex systems which suppose the existence of an objective reality work very well. Any attempt to throw out the idea of objective reality still has to explain why these things work.

A middle route might be to view science as the systematic collection of subjective information in a way that allows for intersubjective agreement between people. Under this understanding, even if we cannot see the world as it is, we could get universal or near-universal intersubjective agreement about something like how fast light travels in a vacuum. This might be as good as it gets, or it could be a way to narrow down what we can know objectively. Or maybe it is something else entirely.

While objective reality likely exists, our senses might not be able to access it well at all. We are limited beings with limited viewpoints and brains that begin to process sensory data the moment we acquire it. We must always be aware of our perspective, how that impacts what data we have access to, and that other perspectives may have a grain of truth to them…

Objective reality exists, but what can you know about it that isn’t subjective? Maybe not much: “You don’t see objective reality objectively: neuroscience catches up to philosophy.”

* Albert Einstein

###

As we ponder perspective, we might send thoughtful birthday greetings to Confucius; he was born on this date in 551 BCE. A Chinese philosopher and politician of the Spring and Autumn period, he has been traditionally considered the paragon of Chinese sages and is widely considered one of the most important and influential individuals in human history, as his teachings and philosophy formed the basis of East Asian culture and society, and continue to remain influential across China and East Asia today.

His philosophical teachings, called Confucianism, emphasized personal and governmental morality, correctness of social relationships, justice, kindness, and sincerity. Confucianism was part of the Chinese social fabric and way of life; to Confucians, everyday life was the arena of religion. It was he who espoused the well-known principle “Do not do unto others what you do not want done to yourself,” the Golden Rule.

source

Written by (Roughly) Daily

September 28, 2021 at 1:00 am

“In the long run, we are all dead”*…

I’ve spent several decades thinking (and helping others think) about the future: e.g., doing scenario planning via GBN and Heminge & Condell, working with The Institute for the Future, thinking with the folks at the Long Now Foundation; I deeply believe in the importance of long-term thinking. It’s a critical orientation– both a perspective and a set of tools/techniques– that can help us offset our natural tendency to act in and for the short-run and help us be better, more responsible ancestors.

But two recent articles warn that “the long term” can be turned into a justification for all sorts of grief. The first, from Phil Torres (@xriskology), argues that “so-called rationalists” have created a disturbing secular religion– longtermism– that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites…

Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by [Oxford philosopher Nick] Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)

The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today.
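For readers who want to check the quoted calculation, the arithmetic works out as Torres states; a minimal sketch, using only the figures given in the excerpt (Bostrom’s 10^23 potential future people, a benefit reaching 0.00000000001 percent of them, and 1 billion people in extreme poverty today):

```python
# Figures as given in the excerpt (not independently verified here)
future_people = 10**23        # Bostrom's estimate for the Virgo Supercluster
present_poor = 10**9          # 1 billion present people in extreme poverty

# 0.00000000001 percent = 10^-11 percent = a fraction of 10^-13,
# so divide by 10^13 (exact integer arithmetic avoids float rounding)
beneficiaries = future_people // 10**13

print(beneficiaries)                  # 10000000000, i.e. 10 billion
print(beneficiaries // present_poor)  # 10 -- ten times the present-day figure
```

The entire force of the longtermist comparison rests on that factor of ten.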

[For more on posthumanism, see here and here]

The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’

The second, from Paul Graham Raven (@PaulGrahamRaven) builds on Torres’ case…

Phil Torres… does a pretty good job of setting out the issues with what might be the ultimate in moral philosophies, namely a moral philosophy whose adherents have convinced themselves that it is not at all a moral philosophy, but rather the end-game of the enlightenment-modernist quest for a fully rational and quantifiable way of legitimating the actions that you and your incredibly wealthy donors were already doing, and would like to continue doing indefinitely, regardless of the consequences to other lesser persons in the present and immediate future, thankyouverymuch.

I have one bone of contention, though the fault is not that of Torres but rather the Longtermists themselves: the labelling of their teleology as “posthuman”. This is exactly wrong, as their position is in fact the absolute core of transhumanism; my guess would be that the successful toxification of that latter term (within academia, as well as without) has led them to instead identify with the somewhat more accepted and established label of posthumanism, so as to avoid critique and/or use a totally different epistemology as a way of drawing fire…

[For more on transhumanism, see here and here]

Longtermism is merely a more acceptable mask for transhumanism

Both pieces are worth reading in full…

And for more on a posthuman (if not in every case posthumanist) future: “The best books about the post-human Earth.”

* John Maynard Keynes

###

As we take the long view, we might send far-sighted birthday greetings to John Flamsteed; he was born on this date in 1646. An astronomer, he compiled a 3,000-star catalogue, Catalogus Britannicus, and a star atlas called Atlas Coelestis, and made the first recorded observations of Uranus (though he mistakenly catalogued it as a star). Flamsteed led the group of scientists who convinced King Charles II to build the Greenwich Observatory, and personally laid its foundation stone. And he served as the first Astronomer Royal.

source

“Philosophy fails to give injustice its due”*…

 

“Following evacuation orders, this [Oakland] store, at 13th and Franklin Streets, was closed. The owner, a University of California graduate of Japanese descent, placed the I AM AN AMERICAN sign on the store front on December 8, the day after Pearl Harbor. Evacuees of Japanese ancestry will be housed in War Relocation Authority centers for the duration.” Photo/caption: Dorothea Lange (3.13.42)

 

 

An astute commentator recently suggested that Isaiah Berlin would be Riga’s greatest political thinker ‒ if not for Judith N. Shklar. We are seeing the beginning of a rediscovery of Shklar and her contribution to 20th-century intellectual life, but she remains something of an insider’s reference. Who was she and what did she have to say that is so important? How did this Jewish émigrée girl from Latvia come to be regarded by many legal and political theorists as one of the 20th century’s most important political thinkers?

Shklar is most often cited as a critic of mainstream liberal thought. During the Cold War in particular, liberalism served as an ideological weapon against the totalitarian threat of the former Soviet Union and its satellite states. But Shklar was concerned about the stifling dimensions of this kind of Western intellectual defence mechanism: it served merely to protect the status quo, and was very often a mere fig leaf for the accumulation of material wealth and for other, more problematic aspects of Western culture. It didn’t ignite any critical reflection or assist any self-awareness of how the liberties of Western democracy had arrived at such a perceived high standing. It was also silent about the fact that fascism had developed in countries that had been identified as pillars of Western civilisation.

In contrast to such self-congratulatory rhetoric, Shklar’s criticism aimed primarily at checking the easy optimism of Cold War liberalism, which, despite challenges to its authority, continues to maintain the inflated image of Western democracies. In Shklar’s view, liberalism is neither a state nor a final achievement. She understood, better than most, the fragility of liberal societies, and she wanted to preserve the liberties they made possible. Shklar saw the increasing availability of private consumer choice and the ever-expanding catalogue of rights often propounded in the name of liberalism as threats to the best achievements of Western democracies. In contrast to orthodox liberal arguments that aim at a summum bonum or common good, Shklar advocated a liberalism of fear, which holds in its sights the summum malum ‒ cruelty. Avoiding cruelty, and the suffering it causes, is the chief aim. Other vices such as hypocrisy, snobbery, arrogance, betrayal and misanthropy should be ranked in relation to this first vice…

Shklar in her Harvard office

Judith Shklar and the dilemma of modern liberalism: “The theorist of belonging.”

* Judith Shklar

###

As we muse on morality, we might spare a thought for Robert Maynard Pirsig; he died on this date in 2017. A philosopher, professor, and author, he is best remembered for two books: Zen and the Art of Motorcycle Maintenance (an exploration into the nature of “quality” in the form of a memoir of a cross-country motorcycle trip) and Lila: An Inquiry into Morals (the account of a sailing journey on which Pirsig’s alter-ego develops a value-based metaphysics).

source

 
