(Roughly) Daily

Posts Tagged ‘Mathematics’

“Nature is pleased with simplicity”*…

As Clare Boothe Luce once said, sometimes “simplicity is the ultimate sophistication”…

… The uniformity of the cosmic microwave background (CMB) tells us that, at its birth, ‘the Universe has turned out to be stunningly simple,’ as Neil Turok, director emeritus of the Perimeter Institute for Theoretical Physics in Ontario, Canada, put it at a public lecture in 2015. ‘[W]e don’t understand how nature got away with it,’ he added. A few decades after Penzias and Wilson’s discovery, NASA’s Cosmic Background Explorer satellite measured faint ripples in the CMB, with variations in radiation intensity of less than one part in 100,000. That’s a lot less than the variation in whiteness you’d see in the cleanest, whitest sheet of paper you’ve ever seen.

Wind forward 13.8 billion years, and, with its trillions of galaxies and zillions of stars and planets, the Universe is far from simple. On at least one planet, it has even managed to generate a multitude of life forms capable of comprehending both the complexity of our Universe and the puzzle of its simple origins. Yet, despite being so rich in complexity, some of these life forms, particularly those we now call scientists, retain a fondness for that defining characteristic of our primitive Universe: simplicity.

The Franciscan friar William of Occam (1285-1347) wasn’t the first to express a preference for simplicity, though he’s most associated with its implications for reason. The principle known as Occam’s Razor insists that, given several accounts of a problem, we should choose the simplest. The razor ‘shaves off’ unnecessary explanations, and is often expressed in the form ‘entities should not be multiplied beyond necessity’. So, if you pass a house and hear barking and purring, then you should think a dog and a cat are the family pets, rather than a dog, a cat and a rabbit. Of course, a bunny might also be enjoying the family’s hospitality, but the existing data provides no support for the more complex model. Occam’s Razor says that we should keep models, theories or explanations simple until proven otherwise – in this case, perhaps until sighting a fluffy tail through the window.

Seven hundred years ago, William of Occam used his razor to dismantle medieval science or metaphysics. In subsequent centuries, the great scientists of the early modern era used it to forge modern science. The mathematician Claudius Ptolemy’s (c. 100-170 CE) system for calculating the motions of the planets, based on the idea that the Earth was at the centre, was a theory of byzantine complexity. So, when Copernicus (1473-1543) was confronted by it, he searched for a solution that ‘could be solved with fewer and much simpler constructions’. The solution he discovered – or rediscovered, as it had been proposed in ancient Greece by Aristarchus of Samos, but then dismissed by Aristotle – was of course the solar system, in which the planets orbit around the Sun. Yet, in Copernicus’s hands, it was no more accurate than Ptolemy’s geocentric system. Copernicus’s only argument in favour of heliocentricity was that it was simpler.

Nearly all the great scientists who followed Copernicus retained Occam’s preference for simple solutions. In the 1500s, Leonardo da Vinci insisted that human ingenuity ‘will never devise any [solutions] more beautiful, nor more simple, nor more to the purpose than Nature does’. A century or so later, his countryman Galileo claimed that ‘facts which at first seem improbable will, even on scant explanation, drop the cloak which has hidden them and stand forth in naked and simple beauty.’ Isaac Newton noted in his Principia (1687) that ‘we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances’; while in the 20th century Einstein is said to have advised that ‘Everything should be made as simple as possible, but not simpler.’ In a Universe seemingly so saturated with complexity, what work does simplicity do for us?

Part of the answer is that simplicity is the defining feature of science. Alchemists were great experimenters, astrologers can do maths, and philosophers are great at logic. But only science insists on simplicity…

Just why do simpler laws work so well? The statistical approach known as Bayesian inference, after the English statistician Thomas Bayes (1702-61), can help explain simplicity’s power. Bayesian inference allows us to update our degree of belief in an explanation, theory or model based on its ability to predict data. To grasp this, imagine you have a friend who has two dice. The first is a simple six-sided cube, and the second is more complex, with 60 sides that can throw 60 different numbers. Suppose your friend throws one of the dice in secret and calls out a number, say 5. She asks you to guess which dice was thrown. Like astronomical data that either the geocentric or heliocentric system could account for, the number 5 could have been thrown by either dice. Are they equally likely? Bayesian inference says no, because it weights alternative models – the six- vs the 60-sided dice – according to the likelihood that they would have generated the data. There is a one-in-six chance of a six-sided dice throwing a 5, whereas only a one-in-60 chance of the 60-sided dice throwing a 5. Comparing likelihoods, then, the six-sided dice is 10 times more likely to be the source of the data than the 60-sided dice.
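
To make the arithmetic explicit, here is a minimal sketch of that comparison in Python (my illustration, not from the essay), assuming we start with no preference for either die:

```python
# Bayesian comparison of two hypotheses -- a 6-sided vs. a 60-sided die --
# after hearing that the secret throw came up 5. Equal priors are assumed.

def posterior(observation: int) -> dict:
    priors = {"six-sided": 0.5, "sixty-sided": 0.5}
    # Likelihood of this particular number under each hypothesis.
    likelihoods = {
        "six-sided": 1 / 6 if 1 <= observation <= 6 else 0.0,
        "sixty-sided": 1 / 60 if 1 <= observation <= 60 else 0.0,
    }
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: weight / total for h, weight in unnormalised.items()}

print(posterior(5))
# {'six-sided': 0.909..., 'sixty-sided': 0.0909...} -- the same 10-to-1
# ratio as the likelihood comparison in the passage.
```

Had the number called out been, say, 23, only the 60-sided die could have produced it, and the posterior would shift entirely to that hypothesis.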

Simple scientific laws are preferred, then, because, if they fit or fully explain the data, they’re more likely to be the source of it.

In my latest book, I propose a radical, if speculative, solution for why the Universe might in fact be as simple as it’s possible to be. Its starting point is the remarkable theory of cosmological natural selection (CNS) proposed by the physicist Lee Smolin. CNS proposes that, just like living creatures, universes have evolved through a cosmological process, analogous to natural selection.

Smolin came up with CNS as a potential solution to what’s called the fine-tuning problem: how the fundamental constants and parameters, such as the masses of the fundamental particles or the charge of an electron, got to be the precise values needed for the creation of matter, stars, planets and life. CNS first notes the apparent symmetry between the Big Bang, in which stars and particles were spewed out of a dimensionless point at the birth of our Universe, and the Big Crunch, the scenario for the end of our Universe when a supermassive black hole swallows up stars and particles before vanishing back into a dimensionless point. This symmetry has led many cosmologists to propose that black holes in our Universe might be the ‘other side’ of Big Bangs of other universes, expanding elsewhere. In this scenario, time did not begin at the Big Bang, but continues backwards through to the death of its parent universe in a Big Crunch, through to its birth from a black hole, and so on, stretching backward in time, potentially into infinity. Not only that but, since our region of the Universe is filled with an estimated 100 billion supermassive black holes, Smolin proposes that each is the progenitor of one of 100 billion universes that have descended from our own.

The model Smolin proposed includes a kind of universal self-replication process, with black holes acting as reproductive cells. The next ingredient is heredity. Smolin proposes that each offspring universe inherits almost the same fundamental constants of its parent. The ‘almost’ is there because Smolin suggests that, in a process analogous to mutation, their values are tweaked as they pass through a black hole, so baby universes become slightly different from their parent. Lastly, he imagines a kind of cosmological ecosystem in which universes compete for matter and energy. Gradually, over a great many cosmological generations, the multiverse of universes would become dominated by the fittest and most fecund universes, through their possession of those rare values of the fundamental constants that maximise black holes, and thereby generate the maximum number of descendant universes.

Smolin’s CNS theory explains why our Universe is finely tuned to make many black holes, but it does not account for why it is simple. I have my own explanation of this, though Smolin himself is not convinced. First, I point out that natural selection carries its own Occam’s Razor that removes redundant biological features through the inevitability of mutations. While most mutations are harmless, those that impair vital functions are normally removed from the gene pool because the individuals carrying them leave fewer descendants. This process of ‘purifying selection’, as it’s known, maintains our genes, and the functions they encode, in good shape.

However, if an essential function becomes redundant, perhaps by a change of environment, then purifying selection no longer works. For example, by standing upright, our ancestors lifted their noses off the ground, so their sense of smell became less important. This means that mutations could afford to accumulate in the newly dispensable genes, until the functions they encoded were lost. For us, hundreds of smell genes accumulated mutations, so that we lost the ability to detect hundreds of odours that we no longer need to smell. This inevitable process of mutational pruning of inessential functions provides a kind of evolutionary Occam’s Razor that removes superfluous biological complexity.

Perhaps a similar process of purifying selection operates in cosmological natural selection to keep things simple…

It’s unclear whether the kind of multiverse envisaged by Smolin’s theory is finite or infinite. If infinite, then the simplest universe capable of forming black holes will be infinitely more abundant than the next simplest universe. If instead the supply of universes is finite, then we have a similar situation to biological evolution on Earth. Universes will compete for available resources – matter and energy – and the simplest that convert more of their mass into black holes will leave the most descendants. For both scenarios, if we ask which universe we are most likely to inhabit, it will be the simplest, as they are the most abundant. When inhabitants of these universes peer into the heavens to discover their cosmic microwave background and perceive its incredible smoothness, they, like Turok, will remain baffled at how their universe has managed to do so much from such a ‘stunningly simple’ beginning.

The cosmological razor idea has one further startling implication. It suggests that the fundamental law of the Universe is not quantum mechanics, or general relativity or even the laws of mathematics. It is the law of natural selection discovered by Darwin and Wallace. As the philosopher Daniel Dennett insisted, it is ‘The single best idea anyone has ever had.’ It might also be the simplest idea that any universe has ever had.

Does the existence of a multiverse hold the key for why nature’s laws seem so simple? “Why simplicity works,” from JohnJoe McFadden (@johnjoemcfadden)

* “Nature does nothing in vain when less will serve; for Nature is pleased with simplicity and affects not the pomp of superfluous causes.” – Isaac Newton, The Mathematical Principles of Natural Philosophy

###

As we emphasize the essential, we might spare a thought for Martin Gardner; he died on this date in 2010. Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf. especially his work on Lewis Carroll– including the delightful Annotated Alice— and on G.K. Chesterton).  And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a regular column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.

 source

“The world is bound in secret knots”*…

It’s knot easy, but it’s important, to understand knots…

From whimsical flower crowns to carelessly tied shoelaces to hopelessly tangled headphones, knots are everywhere. 

That’s not surprising, as knots are quite ancient, predating both the use of the axe and of the wheel and potentially even the divergence of humans from other apes. After all, ropes and cords are practically useless without being tied to something else, making knots one of the most ancient technologies still remarkably relevant today.

But these tie-offs can be a problem, since knots actually decrease the strength of a rope. When a rope made up of multiple fibers is taut, those fibers all share equal portions of the load. However, the bending and compression where the knot forces the rope to curve (usually around itself, or around the thing it is tied to) create extra tension in only some of the fibers. That’s where the rope will break if yanked with too much force. And this isn’t a small effect: common knots reduce the strength of a rope by anywhere from 20 percent for the strongest knots to over 50 percent for a simple overhand knot.
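
As a rough, hedged illustration of what those percentages mean in practice (my own sketch; the rope rating and efficiency figures are only the ballpark numbers quoted above):

```python
# Knot "efficiency": the fraction of a rope's rated breaking strength that
# remains once a knot is tied in it. Figures are only the rough ranges
# quoted in the passage, not measured values.

KNOT_EFFICIENCY = {
    "no knot": 1.00,
    "strong knot (~20% loss)": 0.80,
    "overhand knot (~50%+ loss)": 0.50,
}

rated_strength_kN = 10.0  # a hypothetical rope rated to 10 kN

for knot, efficiency in KNOT_EFFICIENCY.items():
    print(f"{knot}: fails near {rated_strength_kN * efficiency:.0f} kN")
```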

Experience has taught surgeons, climbers, and sailors which knots are best for sewing up a patient, or rescuing someone from a ravine, or tying off a billowing sail, but until some recent research from a group at MIT it was hard to tell what actually makes one knot better than another… 

Which knot is the strongest? “The tangled physics of knots, one of our simplest and oldest technologies,” from Margaux Lopez (@margaux_lopez_).

See also: “The twisted math of knot theory can help you tell an overhand knot from an unknot.”

* Athanasius Kircher

###

As we understand the over and under, we might send constructive birthday greetings to John “Blind Jack” Metcalf; he was born on this date in 1717. Blind from the age of six, he was an accomplished diver, swimmer, card player, and fiddler. But he is best remembered for his work between 1765 and 1792 when he emerged as the first professional road builder in the Industrial Revolution. He laid about 180 miles of turnpike road, mainly in the north of England– and became known as one of the “fathers of the modern road.”

Just before his death, he documented his remarkably eventful life; you can read it here.

source

“Everything we care about lies somewhere in the middle, where pattern and randomness interlace”*…

True randomness (it’s lumpy)

We tend dramatically to underestimate the role of randomness in the world…

Arkansas was one out away from the 2018 College World Series championship, leading Oregon State in the series and 3-2 in the ninth inning of the game when Cadyn Grenier lofted a foul pop down the right-field line. Three Razorbacks converged on the ball and were in position to make a routine play on it, only to watch it fall untouched to the ground in the midst of them. Had any one of them made the play, Arkansas would have been the national champion.

Nobody did.

Given “another lifeline,” Grenier hit an RBI single to tie the game before Trevor Larnach launched a two-run homer to give the Beavers a 5-3 lead and, ultimately, the game. “As soon as you see the ball drop, you know you have another life,” Grenier said. “That’s a gift.” The Beavers accepted the gift eagerly and went on to win the championship the next day as Oregon State rode freshman pitcher Kevin Abel to a 5-0 win over Arkansas in the deciding game of the series. Abel threw a complete game shutout and retired the last 20 hitters he faced.

The highly unlikely happens pretty much all the time…

We readily – routinely – underestimate the power and impact of randomness in and on our lives. In his book, The Drunkard’s Walk, Caltech physicist Leonard Mlodinow employs the idea of the “drunkard’s [random] walk” to compare “the paths molecules follow as they fly through space, incessantly bumping, and being bumped by, their sister molecules,” with “our lives, our paths from college to career, from single life to family life, from first hole of golf to eighteenth.” 

Although countless random interactions seem to cancel one another out within large data sets, sometimes, “when pure luck occasionally leads to a lopsided preponderance of hits from some particular direction…a noticeable jiggle occurs.” When that happens, we notice the unlikely directional jiggle and build a carefully concocted story around it while ignoring the many, many random, counteracting collisions.

As Tversky and Kahneman have explained, “Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not ‘corrected’ as a chance process unfolds, they are merely diluted.”
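
A tiny simulation (my sketch, not Tversky and Kahneman’s) makes “diluted, not corrected” concrete: an early surplus of heads in a fair coin’s record never gets paid back by extra tails; it simply shrinks as a fraction of an ever longer sequence:

```python
import random

# Start with an (unlikely) opening run of 10 heads, then keep flipping a
# fair coin. The absolute surplus of heads is never "corrected" away --
# it just becomes a vanishing proportion of the whole record.

random.seed(1)
heads = flips = 10  # pretend the first ten flips were all heads

for n in (100, 1_000, 10_000, 100_000):
    while flips < n:
        heads += random.random() < 0.5
        flips += 1
    surplus = heads - flips / 2
    print(f"{flips:>6} flips: surplus of heads {surplus:+7.1f}, "
          f"proportion of heads {heads / flips:.3f}")
```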

As Stephen Jay Gould famously argued, were we able to recreate the experiment of life on Earth a million different times, nothing would ever be the same, because evolution relies upon randomness. Indeed, the essence of history is contingency.

Randomness rules.

Luck matters. A lot. Yet, we tend dramatically to underestimate the role of randomness in the world.

The self-serving bias is our tendency to see the good stuff that happens as our doing (“we worked really hard and executed the game plan well”) while the bad stuff isn’t our fault (“It just wasn’t our night” or “we simply couldn’t catch a break” or “we would have won if the umpiring hadn’t been so awful”). Thus, desirable results are typically due to our skill and hard work — not luck — while lousy results are outside of our control and the offspring of being unlucky.

Two fine books undermine this outlook by (rightly) attributing a surprising amount of what happens to us — both good and bad – to luck. Michael Mauboussin’s The Success Equation seeks to untangle elements of luck and skill in sports, investing, and business. Ed Smith’s Luck considers a number of fields – international finance, war, sports, and even his own marriage – to examine how random chance influences the world around us. For example, Mauboussin describes the “paradox of skill” as follows: “As skill improves, performance becomes more consistent, and therefore luck becomes more important.” In investing, therefore (and for example), as the population of skilled investors has increased, the variation in skill has narrowed, making luck increasingly important to outcomes.
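
A minimal sketch of the “paradox of skill” (mine, not Mauboussin’s): model each outcome as skill plus luck, and watch how often the most skilled competitor actually wins as the spread of skill narrows:

```python
import random

# Outcome = skill + luck. When everyone is nearly equally skilled, the
# winner is increasingly the luckiest competitor, not the most skilled.

def best_skill_wins(skill_spread: float, luck_spread: float = 1.0,
                    players: int = 100, trials: int = 2_000) -> float:
    wins = 0
    for _ in range(trials):
        skills = [random.gauss(0, skill_spread) for _ in range(players)]
        outcomes = [s + random.gauss(0, luck_spread) for s in skills]
        # Did the most skilled player post the best outcome?
        wins += outcomes.index(max(outcomes)) == skills.index(max(skills))
    return wins / trials

random.seed(0)
for spread in (2.0, 1.0, 0.25):
    print(f"skill spread {spread}: most-skilled player wins "
          f"{best_skill_wins(spread):.0%} of the time")
```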

On account of the growth and development of the investment industry, John Bogle could quite consistently write his senior thesis at Princeton on the successes of active fund management and then go on to found Vanguard and become the primary developer and intellectual forefather of indexing. In other words, the ever-increasing aggregate skill (supplemented by massive computing power) of the investment world has come largely to cancel itself out.

After a big or revolutionary event, we tend to see it as having been inevitable. Such is the narrative fallacy. In this paper, ESSEC Business School’s Stoyan Sgourev notes that scholars of innovation typically focus upon the usual type of case, where incremental improvements rule the day. Sgourev moves past the typical to look at the unusual type of case, where there is a radical leap forward (equivalent to Thomas Kuhn’s paradigm shifts in science), as with Picasso and Les Demoiselles d’Avignon.

As Sgourev carefully argued, the Paris art market of Picasso’s time had recently become receptive to the commercial possibilities of risk-taking. Thus, artistic innovation was becoming commercially viable. Breaking with the past was then being encouraged for the first time. It would soon be demanded.

Most significantly for our purposes, Sgourev’s analysis of Cubism suggests that having an exceptional idea isn’t enough. For radical innovation really to take hold, market conditions have to be right, making its success a function of luck and timing as much as genius. Note that Van Gogh — no less a genius than Picasso — never sold a painting in his lifetime.

As noted above, we all like to think that our successes are earned and that only our failures are due to luck – bad luck. But the old expression – it’s better to be lucky than good – is at least partly true. That said, it’s best to be lucky *and* good. As a consequence, in all probabilistic fields (which is nearly all of them), the best performers dwell on process and diversify their bets. You should do the same…

As [Nate] Silver emphasizes in The Signal and the Noise, we readily overestimate the degree of predictability in complex systems [and t]he experts we see in the media are much too sure of themselves (I wrote about this problem in our industry from a slightly different angle…). Much of what we attribute to skill is actually luck.

Plan accordingly.

Taking the unaccountable into account: “Randomness Rules,” from Bob Seawright (@RPSeawright), via @JVLast

[image above: source]

* James Gleick, The Information: A History, a Theory, a Flood

###

As we contemplate chance, we might spare a thought for Oskar Morgenstern; he died on this date in 1977. An economist who fled Nazi Germany for Princeton, he collaborated with the mathematician John von Neumann to write Theory of Games and Economic Behavior, published in 1944, which is recognized as the first book on game theory— thus co-founding the field.

Game theory was developed extensively in the 1950s, and has become widely recognized as an important tool in many fields– perhaps especially in the study of evolution. Eleven game theorists have won the economics Nobel Prize, and John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory.

Game theory’s roots date back (at least) to the 1654 letters between Pascal and Fermat, which (along with work by Cardano and Huygens) marked the beginning of probability theory. (See Peter Bernstein’s marvelous Against the Gods.) The application of probability (Bayes’ rule, discrete and continuous random variables, and the computation of expectations) accounts for the utility of game theory; the role of randomness (along with the behavioral psychology of a game’s participants) explain why it’s not a perfect predictor.

source


“If the doors of perception were cleansed everything would appear to man as it is, infinite”*…

For 50 years, mathematicians have believed that the total number of real numbers is unknowable. A new proof suggests otherwise…

Infinity comes in many sizes. In 1873, the German mathematician Georg Cantor shook math to the core when he discovered that the “real” numbers that fill the number line — most with never-ending digits, like 3.14159… — outnumber “natural” numbers like 1, 2 and 3, even though there are infinitely many of both.

Infinite sets of numbers mess with our intuition about size, so as a warmup, compare the natural numbers {1, 2, 3, …} with the odd numbers {1, 3, 5, …}. You might think the first set is bigger, since only half its elements appear in the second set. Cantor realized, though, that the elements of the two sets can be put in a one-to-one correspondence. You can pair off the first elements of each set (1 and 1), then pair off their second elements (2 and 3), then their third (3 and 5), and so on forever, covering all elements of both sets. In this sense, the two infinite sets have the same size, or what Cantor called “cardinality.” He designated their size with the cardinal number ℵ₀ (“aleph-zero”).
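
Written as a formula (standard notation, not in the excerpt), the pairing is the map

$$ f(n) = 2n - 1, \qquad f \colon \{1, 2, 3, \dots\} \longrightarrow \{1, 3, 5, \dots\}, $$

which sends each natural number to exactly one odd number and reaches every odd number exactly once, the one-to-one correspondence that gives both sets the cardinality ℵ₀.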

But Cantor discovered that natural numbers can’t be put into one-to-one correspondence with the continuum of real numbers. For instance, try to pair 1 with 1.00000… and 2 with 1.00001…, and you’ll have skipped over infinitely many real numbers (like 1.000000001…). You can’t possibly count them all; their cardinality is greater than that of the natural numbers.

Sizes of infinity don’t stop there. Cantor discovered that any infinite set’s power set — the set of all subsets of its elements — has larger cardinality than it does. Every power set itself has a power set, so that cardinal numbers form an infinitely tall tower of infinities.
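
In symbols (again standard notation rather than anything in the excerpt), Cantor’s theorem says that for any set S,

$$ |\mathcal{P}(S)| = 2^{|S|} > |S|, $$

so taking power sets again and again yields ℵ₀ < 2^ℵ₀ < 2^(2^ℵ₀) < …, the “infinitely tall tower” of the paragraph above.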

Standing at the foot of this forbidding edifice, Cantor focused on the first couple of floors. He managed to prove that the set formed from all the different ways of ordering natural numbers (from smallest to largest, for example, or with all odd numbers first) has cardinality ℵ₁, one level up from the natural numbers. Moreover, each of these “order types” encodes a real number.

His continuum hypothesis asserts that this is exactly the size of the continuum — that there are precisely ℵ₁ real numbers. In other words, the cardinality of the continuum immediately follows ℵ₀, the cardinality of the natural numbers, with no sizes of infinity in between.

But to Cantor’s immense distress, he couldn’t prove it.

In 1900, the mathematician David Hilbert put the continuum hypothesis first on his famous list of 23 math problems to solve in the 20th century. Hilbert was enthralled by the nascent mathematics of infinity — “Cantor’s paradise,” as he called it — and the continuum hypothesis seemed like its lowest-hanging fruit.

To the contrary, shocking revelations last century turned Cantor’s question into a deep epistemological conundrum.

The trouble arose in 1931, when the Austrian-born logician Kurt Gödel discovered that any set of axioms that you might posit as a foundation for mathematics will inevitably be incomplete. There will always be questions that your list of ground rules can’t settle, true mathematical facts that they can’t prove. As Gödel suspected right away, the continuum hypothesis is such a case: a problem that’s independent of the standard axioms of mathematics.

These axioms, 10 in all, are known as ZFC (for “Zermelo-Fraenkel axioms with the axiom of choice”), and they undergird almost all of modern math. The axioms describe basic properties of collections of objects, or sets. Since virtually everything mathematical can be built out of sets (the empty set {} denotes 0, for instance; {{}} denotes 1; {{},{{}}} denotes 2, and so on), the rules of sets suffice for constructing proofs throughout math.
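
A small sketch (mine, purely for illustration) of that set-theoretic encoding, using Python frozensets to stand in for pure sets:

```python
# Von Neumann-style encoding of the naturals: 0 is the empty set, and each
# successor is the set of everything built so far, i.e. n + 1 = n ∪ {n}.
# The excerpt's {{}} for 1 and {{},{{}}} for 2 are exactly these sets.

def encode(n: int) -> frozenset:
    s = frozenset()        # 0 = {}
    for _ in range(n):
        s = s | {s}        # successor step: n ∪ {n}
    return s

for k in range(3):
    print(k, "->", encode(k))
# Printed output (element order may vary):
# 0 -> frozenset()
# 1 -> frozenset({frozenset()})
# 2 -> frozenset({frozenset(), frozenset({frozenset()})})
```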

In 1940, Gödel showed that you can’t use the ZFC axioms to disprove the continuum hypothesis. Then in 1963, the American mathematician Paul Cohen showed the opposite — you can’t use them to prove it, either. Cohen’s proof, together with Gödel’s, means the continuum hypothesis is independent of the ZFC axioms; they can have it either way.

In addition to the continuum hypothesis, most other questions about infinite sets turn out to be independent of ZFC as well. This independence is sometimes interpreted to mean that these questions have no answer, but most set theorists see that as a profound misconception.

They believe the continuum has a precise size; we just need new tools of logic to figure out what that is. These tools will come in the form of new axioms. “The axioms do not settle these problems,” said the set theorist Menachem Magidor, so “we must extend them to a richer axiom system.” It’s ZFC as a means to mathematical truth that’s lacking — not truth itself.

Ever since Cohen, set theorists have sought to shore up the foundations of infinite math by adding at least one new axiom to ZFC. This axiom should illuminate the structure of infinite sets, engender natural and beautiful theorems, avoid fatal contradictions, and, of course, settle Cantor’s question…

Two rival axioms emerged that do just that. For decades, they were suspected of being logically incompatible.

In October 2018, David Asperó was on holiday in Italy, gazing out a car window as his girlfriend drove them to their bed-and-breakfast, when it came to him: the missing step of what’s now a landmark new proof about the sizes of infinity. “It was this flash experience,” he said.

Asperó, a mathematician at the University of East Anglia in the United Kingdom, contacted the collaborator with whom he’d long pursued the proof, Ralf Schindler of the University of Münster in Germany, and described his insight. “It was completely incomprehensible to me,” Schindler said. But eventually, the duo turned the phantasm into solid logic.

Their proof, which appeared in May in the Annals of Mathematics, unites two rival axioms that have been posited as competing foundations for infinite mathematics. Asperó and Schindler showed that one of these axioms implies the other, raising the likelihood that both axioms — and all they intimate about infinity — are true…

There are an infinite number of infinities. Which one corresponds to the real numbers? “How Many Numbers Exist? Infinity Proof Moves Math Closer to an Answer.”

[TotH to MK]

* William Blake

###

As we contemplate counting, we might spare a thought for Georg Friedrich Bernhard Riemann; he died on this date in 1866. A mathematician who made contributions to analysis, number theory, and differential geometry, he is remembered (among other things) for his 1859 paper on the prime-counting function, containing the original statement of the Riemann hypothesis, regarded as one of the most influential papers in analytic number theory.

source

“Several thousand years from now, nothing about you as an individual will matter. But what you did will have huge consequences.”*…

In 2013, a philosopher and ecologist named Timothy Morton proposed that humanity had entered a new phase. What had changed was our relationship to the nonhuman. For the first time, Morton wrote, we had become aware that “nonhuman beings” were “responsible for the next moment of human history and thinking.” The nonhuman beings Morton had in mind weren’t computers or space aliens but a particular group of objects that were “massively distributed in time and space.” Morton called them “hyperobjects”: all the nuclear material on earth, for example, or all the plastic in the sea. “Everyone must reckon with the power of rising waves and ultraviolet light,” Morton wrote, in “Hyperobjects: Philosophy and Ecology After the End of the World.” Those rising waves were being created by a hyperobject: all the carbon in the atmosphere.

Hyperobjects are real, they exist in our world, but they are also beyond us. We know a piece of Styrofoam when we see it—it’s white, spongy, light as air—and yet fourteen million tons of Styrofoam are produced every year; chunks of it break down into particles that enter other objects, including animals. Although Styrofoam is everywhere, one can never point to all the Styrofoam in the world and say, “There it is.” Ultimately, Morton writes, whatever bit of Styrofoam you may be interacting with at any particular moment is only a “local manifestation” of a larger whole that exists in other places and will exist on this planet millennia after you are dead. Relative to human beings, therefore, Styrofoam is “hyper” in terms of both space and time. It’s not implausible to say that our planet is a place for Styrofoam more than it is a place for people.

When “Hyperobjects” was published, philosophers largely ignored it. But Morton, who uses the pronouns “they” and “them,” quickly found a following among artists, science-fiction writers, pop stars, and high-school students. The international curator and art-world impresario Hans Ulrich Obrist began citing Morton’s ideas; Morton collaborated on a talk with Laurie Anderson and helped inspire “Reality Machines,” an installation by the Icelandic-Danish artist Olafur Eliasson. Kim Stanley Robinson and Jeff VanderMeer—prominent sci-fi writers who also deal with ecological themes—have engaged with Morton’s work; Björk blurbed Morton’s book “Being Ecological,” writing, “I have been reading Tim Morton’s books for a while and I like them a lot.”

The problem with hyperobjects is that you cannot experience one, not completely. You also can’t not experience one. They bump into you, or you bump into them; they bug you, but they are also so massive and complex that you can never fully comprehend what’s bugging you. This oscillation between experiencing and not experiencing cannot be resolved. It’s just the way hyperobjects are.

Take oil: nature at its most elemental; black ooze from the depths of the earth. And yet oil is also the stuff of cars, plastic, the Industrial Revolution; it collapses any distinction between nature and not-nature. Driving to the port, we were surrounded by oil and its byproducts—the ooze itself, and the infrastructure that transports it, refines it, holds it, and consumes it—and yet, Morton said, we could never really see the hyperobject of capital-“O” Oil: it shapes our lives but is too big to see.

Since around 2010, Morton has become associated with a philosophical movement known as object-oriented ontology, or O.O.O. The point of O.O.O. is that there is a vast cosmos out there in which weird and interesting shit is happening to all sorts of objects, all the time. In a 1999 lecture, “Object-Oriented Philosophy,” Graham Harman, the movement’s central figure, explained the core idea:

The arena of the world is packed with diverse objects, their forces unleashed and mostly unloved. Red billiard ball smacks green billiard ball. Snowflakes glitter in the light that cruelly annihilates them, while damaged submarines rust along the ocean floor. As flour emerges from mills and blocks of limestone are compressed by earthquakes, gigantic mushrooms spread in the Michigan forest. While human philosophers bludgeon each other over the very possibility of “access” to the world, sharks bludgeon tuna fish and icebergs smash into coastlines…

We are not, as many of the most influential twentieth-century philosophers would have it, trapped within language or mind or culture or anything else. Reality is real, and right there to experience—but it also escapes complete knowability. One must confront reality with the full realization that you’ll always be missing something in the confrontation. Objects are always revealing something, and always concealing something, simply because they are Other. The ethics implied by such a strangely strange world hold that every single object everywhere is real in its own way. This realness cannot be avoided or backed away from. There is no “outside”—just the entire universe of entities constantly interacting, and you are one of them.

… “[Covid-19 is] the ultimate hyperobject,” Morton said. “The hyperobject of our age. It’s literally inside us.” We talked for a bit about fear of the virus—Morton has asthma, and suffers from sleep apnea. “I feel bad for subtitling the hyperobjects book ‘Philosophy and Ecology After the End of the World,’ ” Morton said. “That idea scares people. I don’t mean ‘end of the world’ the way they think I mean it. But why do that to people? Why scare them?”

What Morton means by “the end of the world” is that a world view is passing away. The passing of this world view means that there is no “world” anymore. There’s just an infinite expanse of objects, which have as much power to determine us as we have to determine them. Part of the work of confronting strange strangeness is therefore grappling with fear, sadness, powerlessness, grief, despair. “Somewhere, a bird is singing and clouds pass overhead,” Morton writes, in “Being Ecological,” from 2018. “You stop reading this book and look around you. You don’t have to be ecological. Because you are ecological.” It’s a winsome and terrifying idea. Learning to see oneself as an object among objects is destabilizing—like learning “to navigate through a bad dream.” In many ways, Morton’s project is not philosophical but therapeutic. They have been trying to prepare themselves for the seismic shifts that are coming as the world we thought we knew transforms.

For the philosopher of “hyperobjects”—vast, unknowable things that are bigger than ourselves—the coronavirus is further proof that we live in a dark ecology: “Timothy Morton’s Hyper-Pandemic.”

* “Several thousand years from now, nothing about you as an individual will matter. But what you did will have huge consequences. This is the paradox of the ecological age. And it is why action to change global warming must be massive and collective.” – Timothy Morton, Being Ecological

###

As we find our place, we might send classical birthday greetings to James Clerk Maxwell; he was born on this date in 1831.  A mathematician and physicist, he calculated (circa 1862) that the speed of propagation of an electromagnetic field is approximately the speed of light– kicking off his work in uniting electricity, magnetism, and light… that’s to say, formulating the classical theory of electromagnetic radiation, which is considered the “second great unification in physics” (after the first, realized by Isaac Newton). Though he was the apotheosis of classical (Newtonian) physics, Maxwell laid the foundation for modern physics, starting the search for radio waves and paving the way for such fields as special relativity and quantum mechanics.  In the Millennium Poll – a survey of the 100 most prominent physicists at the turn of the 21st century – Maxwell was voted the third greatest physicist of all time, behind only Newton and Einstein.


 source
