(Roughly) Daily

Posts Tagged ‘philosophy’

“We are what we pretend to be, so we must be careful about what we pretend to be”*…

There is just something obviously reasonable about the following notion: If all life is built from atoms that obey precise equations we know—which seems to be true—then the existence of life might just be some downstream consequence of these laws that we haven’t yet gotten around to calculating. This is essentially a physicist’s way of thinking, and to its credit, it has already done a great deal to help us understand how living things work…

But approaching the subject of life with this attitude will fail us, for at least two reasons. The first reason we might call the fallacy of reductionism. Reductionism is the presumption that any piece of the universe we might choose to study works like some specimen of antique, windup clockwork, so that it is easy (or at least eminently possible) to predict the behavior of the whole once you know the rules governing how each of its parts pushes on and moves with the others…

The second mistake in how people have viewed the boundary between life and non-life is still rampant in the present day and originates in the way we use language. A great many people imagine that if we understand physics well enough, we will eventually comprehend what life is as a physical phenomenon in the same way we now understand how and why water freezes or boils. Indeed, it often seems people expect that a good enough physical theory could become the new gold standard for saying what is alive and what is not.

However, this approach fails to acknowledge that our own role in giving names to the phenomena of the world precedes our ability to say with any clarity what it means to even call something alive. A physicist who wants to devise theories of how living things behave or emerge has to start by making intuitive choices about how to translate the characteristics of the examples of life we know into a physical language. After one has done so, it quickly becomes clear that the boundary between what is alive and what is not is something that already got drawn at the outset, through a different way of talking than physics provides…

Physics is an approach to science that roots itself in the measurement of particular quantities: distance, mass, duration, charge, temperature, and the like. Whether we are talking about making empirical observations or developing theories to make predictions, the language of physics is inherently metrical and mathematical. The phenomena of physics are always expressed in terms of how one set of measurable numbers behaves when other sets of measurable numbers are held fixed or varied. This is why the genius of Newton’s Second Law, F = ma, was not merely that it proposed a successful equation relating force (F), mass (m), and acceleration (a), but rather that it realized that these were all quantities in the world that could be independently measured and compared in order to discover such a general relationship.
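To make the point concrete, here is a minimal sketch (in Python, with invented numbers) of what it means for a law to relate independently measured quantities: mass and acceleration are measured separately, and F = ma predicts what a third, independent instrument should read.

```python
# A minimal, hypothetical illustration of Newton's Second Law as a relation
# among independently measured quantities. All numbers are invented.

mass_kg = 2.0            # measured on a balance
acceleration_ms2 = 3.5   # measured by timing the object's change in velocity

# The law F = m * a predicts what a separate force gauge should read.
predicted_force_N = mass_kg * acceleration_ms2

measured_force_N = 7.1   # hypothetical reading from a spring scale

# The "law" is the claim that prediction and measurement agree within error.
discrepancy = abs(predicted_force_N - measured_force_N)
print(f"predicted {predicted_force_N:.2f} N, measured {measured_force_N:.2f} N, "
      f"discrepancy {discrepancy:.2f} N")
```

Nothing in the sketch depends on naming any of the objects involved; the law lives entirely in the numbers.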

This is not how the science of biology works. It is true that doing excellent research in biology involves trafficking in numbers, especially these days: For example, statistical methods help one gain confidence in trends discovered through repeated observations (such as a significant but small increase in the rate of cell death when a drug is introduced). Nonetheless, there is nothing fundamentally quantitative about the scientific study of life. Instead, biology takes the categories of living and nonliving things for granted as a starting point, and then uses the scientific method to investigate what is predictable about the behavior and qualities of life. Biologists did not have to go around convincing humanity that the world actually divides into things that are alive and things that are not; instead, in much the same way that it is quite popular across the length and breadth of human language to coin terms for commonplace things like stars, rivers, and trees, the difference between being alive and not being alive gets denoted with vocabulary.
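As a rough illustration of the kind of quantitative support mentioned above, the sketch below runs a two-sample t-test on hypothetical cell-death measurements for treated and untreated cultures; the numbers, group sizes, and use of scipy are assumptions made for this example, not anything drawn from the article.

```python
# Hypothetical example: does a drug produce a small but statistically
# significant increase in the rate of cell death? All numbers are invented.
from scipy import stats

# Fraction of dead cells observed in ten control and ten treated cultures.
control = [0.050, 0.047, 0.052, 0.049, 0.051, 0.048, 0.050, 0.053, 0.046, 0.049]
treated = [0.055, 0.058, 0.054, 0.057, 0.056, 0.059, 0.053, 0.057, 0.055, 0.058]

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")

# A small p-value lends confidence to the trend, but notice that the
# categories "alive" and "dead" were supplied before any arithmetic began.
```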

In short, biology could not have been invented without the preexisting concept of life to inspire it, and all it needed to get going was for someone to realize that there were things to be discovered by reasoning scientifically about things that were alive. This means, though, that biology most certainly is not founded on mathematics in the way that physics is. Discovering that plants need sunlight to grow, or that fish will suffocate when taken out of water, requires no quantification of anything whatsoever. Of course, we could learn more by measuring how much sunlight the plant got, or timing how long it takes for the fish-out-of-water to expire. But the basic empirical law in biological terms only concerns itself with what conditions will enable or prevent thriving, and what it means to thrive comes from our qualitative and holistic judgment of what it looks like to succeed at being alive. If we are honest with ourselves, the ability to make this judgment was not taught to us by scientists, but comes from a more common kind of knowledge: We are alive ourselves, and constantly mete out life and death to bugs and flowers in our surroundings. Science may help us to discover new ways to make things live or die, but only once we tell the scientists how to use those words. We did not know any physics when we invented the word “life,” and it would be strange if physics only now began suddenly to start dictating to us what the word means.

The origin of life can’t be explained from first principles: “Why Physics Can’t Tell Us What Life Is.”

See also this interview with Jeremy England, the author of the article linked above (and of the book from which it is excerpted): “The Physicist’s New Book of Life.”

* Kurt Vonnegut, Mother Night

###

As we live and let live, we might spare a thought for Ernest Everett Just; he died on this date in 1941.  A pioneering biologist, academic, and science writer, he contributed mightily to the understanding of cell division, the fertilization of egg cells, experimental parthenogenesis, hydration and dehydration in living cells, and the effect of ultraviolet rays on egg cells.

An African-American, he had limited academic prospects on his graduation from Dartmouth, but was able to land a teaching spot at Howard University.  Just met Frank R. Lillie, the head of the Department of Zoology at the University of Chicago and director of the Marine Biological Laboratory (MBL) at Woods Hole, Mass.  In 1909 Lillie invited Just to spend first one, then several summers at Woods Hole, where Just pioneered the study of whole cells under normal conditions (rather than simply breaking them apart in a laboratory setting).  In 1915, Just was awarded the first Spingarn Medal, the highest honor given by the NAACP.

But outside MBL, Just experienced discrimination.  Seeking more opportunity, he spent most of the 1930s in various European universities– until the outbreak of WW II hostilities caused him to return to the U.S. in late 1940.  He died of pancreatic cancer on this date the next year.

Ernest Everett Just

 source

Written by LW

October 27, 2020 at 1:01 am

“It is the province of knowledge to speak, and it is the privilege of wisdom to listen”*…

Wisdom is full of paradoxes. It is one of the oldest topics in the intellectual history of humanity, and yet talking about wisdom can feel odd and disingenuous. People seem to have intuitions about who is and isn’t wise, but if you press them to define wisdom, they will hesitate. Wisdom, with its mystical qualities, sits on a pedestal, inspiring awe and trepidation, a bit of hushed reverence thrown in. It’s easy to distil wisdom’s archetypes in history (druids, Sufi sages) or popular culture (Star Wars’ Yoda, or Harry Potter’s Dumbledore), but harder to apply to the person on the street. Most people would agree that wisdom is desirable, yet what exactly is it?…

Some psychologists are increasingly confident that they can now measure– and nurture– wisdom, superseding the “speculations” of philosophy and religion: “The Science of Wisdom.”

* Oliver Wendell Holmes

###

As we savor sagacity, we might recall that it was on this date in 1964 that Jean-Paul Sartre was awarded the Nobel Prize in Literature despite attempting to refuse it, saying that he always declined official honors since “a writer should not allow himself to be turned into an institution.”

source

Written by LW

October 22, 2020 at 1:01 am

“Simulation is the situation created by any system of signs when it becomes sophisticated enough, autonomous enough, to abolish its own referent and to replace it with itself”*…

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer…
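For a sense of how such odds can even be framed, here is a toy Bayesian sketch; it is not the published calculation the article describes, and the indifference prior and the assumption that nearly all observers would be simulated are simplifications chosen purely for illustration.

```python
# A toy Bayesian sketch of the "about 50-50" framing. This is NOT the
# published calculation; the prior and the fraction below are assumptions.

p_no_simulations = 0.5       # indifference prior: conscious simulations never get run
p_simulations_exist = 0.5    # indifference prior: they do get run

# If simulations are ever run, suppose nearly all observers are simulated.
frac_simulated_observers = 0.99   # illustrative assumption, not a measurement

p_we_are_simulated = (p_no_simulations * 0.0
                      + p_simulations_exist * frac_simulated_observers)
print(f"P(we are simulated) = {p_we_are_simulated:.3f}")  # just under even odds
```

The toy model only shows why the answer hovers near even odds: the uncertainty is dominated by the prior over whether conscious simulations are ever run at all.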

Learn why gauging whether or not we dwell inside someone else’s computer may come down to advanced AI research—or measurements at the frontiers of cosmology: “Do We Live in a Simulation? Chances Are about 50–50.”

* Jean Baudrillard (who was describing the ways in which the significations and symbolism of culture and media are involved in constructing an understanding of shared existence… which may or may not, itself, be a simulation)

###

As we play the odds, we might send dark birthday greetings to Friedrich Wilhelm Nietzsche; he was born on this date in 1844. A philosopher, cultural critic, composer, poet, and philologist, he and his work have had a profound influence on modern intellectual history.

Nietzsche became the youngest person ever to hold the Chair of Classical Philology at the University of Basel in 1869 at the age of 24, but resigned in 1879 due to health problems that plagued him most of his life. He completed much of his core writing in the following decade, before suffering a complete mental breakdown in 1889, after which he lived in care until his death in 1900.

Nietzsche’s writing spanned philosophical polemics, poetry, cultural criticism, and fiction, all the while displaying a fondness for aphorism and irony. He’s best remembered as a philosopher, for work that included his radical critique of truth in favor of perspectivism; for his genealogical critique of religion and Christian morality (and his related theory of master–slave morality); for his aesthetic affirmation of existence in response to his famous observation of the “death of God” and the profound crisis of nihilism; for his notion of the Apollonian and Dionysian; and for his characterization of the human subject as the expression of competing wills, collectively understood as the will to power. Nietzsche also developed influential concepts such as the Übermensch and the doctrine of eternal return.

After his death, his sister Elisabeth became the curator and editor of Nietzsche’s manuscripts. She edited his unpublished writings to fit her German nationalist beliefs– often contradicting or obfuscating Nietzsche’s stated opinions, which were explicitly opposed to antisemitism and nationalism. Through her published editions, Nietzsche’s work became associated with fascism and Nazism. But scholars contested this interpretation, and corrected editions of his writings were soon made available. Nietzsche’s thought enjoyed renewed popularity in the 1960s and his ideas have since had a profound impact on 20th and early-21st century thinkers across philosophy—especially in schools of continental philosophy such as existentialism, postmodernism and post-structuralism—as well as in art, literature, psychology, politics, and popular culture.

source

Written by LW

October 15, 2020 at 1:01 am

“Justice is the first virtue of social institutions, as truth is of systems of thought”*…

John Rawls (1921–2002) was the most important political philosopher of his age. His 1971 book A Theory of Justice, which offered a philosophical basis for liberal egalitarianism, also supplied the raw material for an entire “Rawlsian” school of thought. But the reputation of Rawls in the academic world grew just as conservative forces committed to fostering greater inequality were becoming dominant, especially in the Anglo-American countries where Rawlsian ideas were most influential…

His philosophical vision of a just society, which embodied the postwar liberal dream of a more perfect America, became the basis for a philosophy known as “liberal egalitarianism.”

Katrina Forrester (@katforrester) on “How John Rawls Became the Liberal Philosopher of a Conservative Age.”

* “Justice is the first virtue of social institutions, as truth is of systems of thought… First: each person is to have an equal right to the most extensive liberty compatible with similar liberty for others. Second: social and economic inequalities are to be arranged so that they are both (a) reasonably expected to be to everyone’s advantage, and (b) attached to positions and offices open to all.” – John Rawls, A Theory of Justice

###

As we play fair, we might recall that it was on this date in 1792 that a group of 12 Freemasons laid the cornerstone of The White House.  Eight years later, John and Abigail Adams moved in.

The White House was designed by James Hoban, an Irish immigrant architect living in Charleston, South Carolina, who won a competition for the commission (and a $500 prize) with a design modeled after Leinster House in Dublin, Ireland.  He beat out a future resident, Thomas Jefferson, whose Monticello/UVa-like design was among the many losers.

It’s not known whether there was anything contained within the cornerstone.  In fact, though the building still stands (albeit rebuilt and expanded after being burned down during the War of 1812), the whereabouts of the stone itself are a bit of a mystery.

 source

Written by LW

October 13, 2020 at 1:01 am

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it”*…

What is it like to be a smartphone? In all the chatter about the future of artificial intelligence, the question has been glossed over or, worse, treated as settled. The longstanding assumption, a reflection of the anthropomorphic romanticism of computer scientists, science fiction writers, and internet entrepreneurs, has been that a self-aware computer would have a mind, and hence a consciousness, similar to our own. We, supreme programmers, would create machine consciousness in our own image.

The assumption is absurd, and not just because the sources and workings of our own consciousness remain unknown to us and hence unavailable as models for coders and engineers. Consciousness is entwined with being, and being with body, and a computer’s body and (speculatively) being have nothing in common with our own. A far more reasonable assumption is that the consciousness of a computer, should it arise, would be completely different from the consciousness of a human being. It would be so different that we probably wouldn’t even recognize it as a consciousness…

The Turing test, in all its variations, would also be useless in identifying an AI. It merely tests for a machine’s ability to feign likeness with ourselves. It provides no insight into the AI’s being, which, again, could be entirely separate from its ability to trick us into sensing it is like us. The Turing test tells us about our own skills; it says nothing about the character of the artificial being.

All of this raises another possibility. It may be that we are already surrounded by AIs but have no idea that they exist. Their beingness is invisible to us, just as ours is to them. We are both objects in the same place, but as beings we inhabit different universes. Our smartphones may right now be having, to borrow [philosopher Thomas] Nagel’s words, “experiences fully comparable in richness of detail to our own.”

Look at your phone. You see a mere tool, there to do your bidding, and perhaps that’s the way your phone sees you, the dutiful but otherwise unremarkable robot that from time to time plugs it into an electrical socket.

Nicholas Carr (@roughtype) applies a well-known thought-experiment on the nature of consciousness to artificial intelligence: “What is it like to be a smartphone?”

See also here, the source of the image above.

* Eliezer Yudkowsky

###

As we wonder, we might spare a thought for Seymour Roger Cray; he died on this date in 1996.  An electrical engineer and computer architect, he designed a series of computers that were the fastest in the world for decades, and founded Cray Research, which built many of these machines– effectively creating the supercomputer industry and earning the honorific “father of supercomputing.”

Seymour Cray with a Cray-1

source
