(Roughly) Daily

Posts Tagged ‘chaos theory’

“The older one gets the more convinced one becomes that his Majesty King Chance does three-quarters of the business of this miserable universe”*…

Bockscar en route to Nagasaki, 9 August 1945. US Air Force photo

In an essay adapted from his book Fluke: Chance, Chaos, and Why Everything We Do Matters, Brian Klaas argues that social scientists are clinging to simple models of reality – with disastrous results. Instead, he suggests, they must embrace chaos theory…

The social world doesn’t work how we pretend it does. Too often, we are led to believe it is a structured, ordered system defined by clear rules and patterns. The economy, apparently, runs on supply-and-demand curves. Politics is a science. Even human beliefs can be charted, plotted, graphed. And using the right regression we can tame even the most baffling elements of the human condition. Within this dominant, hubristic paradigm of social science, our world is treated as one that can be understood, controlled and bent to our whims. It can’t.

Our history has been an endless but futile struggle to impose order, certainty and rationality onto a Universe defined by disorder, chance and chaos. And, in the 21st century, this tendency seems to be only increasing as calamities in the social world become more unpredictable. From 9/11 to the financial crisis, the Arab Spring to the rise of populism, and from a global pandemic to devastating wars, our modern world feels more prone to disastrous ‘shocks’ than ever before. Though we’ve got mountains of data and sophisticated models, we haven’t gotten much better at figuring out what looms around the corner. Social science has utterly failed to anticipate these bolts from the blue. In fact, most rigorous attempts to understand the social world simply ignore its chaotic quality – writing it off as ‘noise’ – so we can cram our complex reality into neater, tidier models. But when you peer closer at the underlying nature of causality, it becomes impossible to ignore the role of flukes and chance events. Shouldn’t our social models take chaos more seriously?

The problem is that social scientists don’t seem to know how to incorporate the nonlinearity of chaos. For how can disciplines such as psychology, sociology, economics and political science anticipate the world-changing effects of something as small as one consequential day of sightseeing or as ephemeral as passing clouds?

On 30 October 1926, Henry and Mabel Stimson stepped off a steam train in Kyoto, Japan, and set in motion an unbroken chain of events that, two decades later, led to the deaths of 140,000 people in a city more than 300 km away.

The American couple began their short holiday in Japan’s former imperial capital by walking from the railway yard to their room at the nearby Miyako Hotel. It was autumn. The maples had turned crimson, and the ginkgo trees had burst into a golden shade of yellow. Henry chronicled a ‘beautiful day devoted to sightseeing’ in his diary.

Nineteen years later, he had become the United States Secretary of War, the chief civilian overseeing military operations in the Second World War, and would soon join a clandestine committee of soldiers and scientists tasked with deciding how to use the first atomic bomb. One Japanese city ticked several boxes: Kyoto, the former imperial capital. The Target Committee agreed that Kyoto must be destroyed. They drew up a tactical bombing map and decided to aim for the city’s railway yard, just around the corner from the Miyako Hotel where the Stimsons had stayed in 1926.

Stimson pleaded with President Harry Truman not to bomb Kyoto. He sent cables in protest. The generals began referring to Kyoto as Stimson’s ‘pet city’. Eventually, Truman acquiesced, removing Kyoto from the list of targets. On 6 August 1945, Hiroshima was bombed instead.

The next atomic bomb was intended for Kokura, a city at the tip of Japan’s southern island of Kyushu. On the morning of 9 August, three days after Hiroshima was destroyed, six US B-29 bombers were launched, including the strike plane Bockscar. Around 10:45am, Bockscar prepared to release its payload. But, according to the flight log, the target ‘was obscured by heavy ground haze and smoke’. The crew decided not to risk accidentally dropping the atomic bomb in the wrong place.

Bockscar then headed for the secondary target, Nagasaki. But it, too, was obscured. Running low on fuel, the plane prepared to return to base, but a momentary break in the clouds gave the bombardier a clear view of the city. Unbeknown to anyone below, Nagasaki was bombed due to passing clouds over Kokura. To this day, the Japanese refer to ‘Kokura’s luck’ when one unknowingly escapes disaster.

Roughly 200,000 people died in the attacks on Hiroshima and Nagasaki – and not Kyoto and Kokura – largely due to one couple’s vacation two decades earlier and some passing clouds. But if such random events could lead to so many deaths and change the direction of a globally destructive war, how are we to understand or predict the fates of human society? Where, in the models of social change, are we supposed to chart the variables for travel itineraries and clouds?

In the 1970s, the British statistician George Box quipped that ‘all models are wrong, but some are useful’. But today, many of the models we use to describe our social world are neither right nor useful. There is a better way. And it doesn’t entail a futile search for regular patterns in the maddening complexity of life. Instead, it involves learning to navigate the chaos of our social worlds…

[Klaas reviews the history of our attempts to conquer uncertainty, concluding with Edward Norton “Butterfly Effect” Lorenz and what he discovered when he tried to predict the weather…]

… Any error, even a trillionth of a percentage point off on any part of the system, would eventually make any predictions about the future futile. Lorenz had discovered chaos theory.

The core principle of the theory is this: chaotic systems are highly sensitive to initial conditions. That means these systems are fully deterministic but also utterly unpredictable. As Poincaré had anticipated in 1908, small changes in conditions can produce enormous errors. By demonstrating this sensitivity, Lorenz proved Poincaré right.
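
To see what that sensitivity means in practice, here is a minimal sketch (mine, not Klaas’s) using the logistic map, a textbook one-line chaotic system; the parameter and starting values are purely illustrative:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# Two trajectories that agree to nine decimal places share nothing
# in common within a few dozen steps.
r = 4.0                          # the fully chaotic regime
x, y = 0.400000000, 0.400000001  # initial conditions differing by 1e-9

for step in range(61):
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.1e}")
    x = r * x * (1 - x)
    y = r * y * (1 - y)
```

The gap roughly doubles each step: the rules are fully deterministic, yet any measurement error, however tiny, eventually swamps the forecast.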

Chaos theory, to this day, explains why our weather forecasts remain useless beyond a week or two. To predict meteorological changes accurately, we, like Laplace’s demon, would have to be perfect in our understanding of weather systems, and – no matter how advanced our supercomputers may seem – we never will be. Confidence in a predictable future, therefore, is the province of charlatans and fools; or, as the US theologian Pema Chödrön put it: ‘If you’re invested in security and certainty, you are on the wrong planet.’

The second wrinkle in our conception of an ordered, certain world came from the discoveries of quantum mechanics that began in the early 20th century. Seemingly irreducible randomness was discovered in bewildering quantum equations, shifting the dominant scientific conception of our world from determinism to indeterminism (though some interpretations of quantum physics arguably remain compatible with a deterministic universe, such as the ‘many-worlds’ interpretation, Bohmian mechanics, also known as the ‘pilot-wave’ model, and the less prominent theory of superdeterminism). Scientific breakthroughs in quantum physics showed that the unruly nature of the Universe could not be fully explained by either gods or Newtonian physics. The world may be defined, at least in part, by equations that yield inexplicable randomness. And it is not just a partly random world, either. It is startlingly arbitrary…

… How can we make sense of social change when consequential shifts often arise from chaos? This is the untameable bane of social science, a field that tries to detect patterns and assert control over the most unruly, chaotic system that exists in the known Universe: 8 billion interacting human brains embedded in a constantly changing world. While we search for order and patterns, we spend less time focused on an obvious but consequential truth. Flukes matter.

Though some scholars in the 19th century, such as the English philosopher John Stuart Mill and his intellectual descendants, believed there were laws governing human behaviour, social science was swiftly disabused of the notion that a straightforward social physics was possible. Instead, most social scientists have aimed toward what the US sociologist Robert K Merton called ‘middle-range theory’, in which researchers hope to identify regularities and patterns in certain smaller realms that can perhaps later be stitched together to derive the broader theoretical underpinnings of human society. Though some social scientists are sceptical that such broader theoretical underpinnings exist, the most common approach to social science is to use empirical data from the past to tease out ordered patterns that point to stable relationships between causes and effects. Which variables best correlate with the onset of civil wars? Which economic indicators offer the most accurate early warning signs of recessions? What causes democracy?

In the mid-20th century, researchers no longer sought the social equivalent of a physical law (like gravity), but they still looked for ways of deriving clear-cut patterns within the social world. What limited this ability was technology. Just as Lorenz was constrained by the available technology when forecasting weather in the Pacific theatre of the Second World War, so too were social scientists constrained by a lack of computing power. This changed in the 1980s and ’90s, when cheap and sophisticated computers became new tools for understanding social worlds. Suddenly, social scientists – sociologists, economists, psychologists or political scientists – could take a large number of variables and plug them into statistical software packages such as SPSS and Stata, or programming languages such as R. Complex equations would then process these data points, finding the ‘line of best fit’ using a ‘linear regression’, to help explain how groups of humans change over time. A quantitative revolution was born.

By the 2000s, area studies specialists who had previously done their research by trekking across the globe and embedding themselves in specific cultures were largely supplanted by office-bound data junkies who could manipulate numbers and offer evidence of hidden relationships that were obscured prior to the rise of sophisticated numerical analysis. In the process, social science became dominated by one computational tool above all others: linear regressions. To help explain social change, this tool uses past data to try to understand the relationships between variables. A regression produces a simplified equation that tries to fit the cluster of real-world datapoints, while ‘controlling’ for potential confounders, in the hopes of identifying which variables drive change. Using this tool, researchers can feed a model with a seemingly endless string of data as they attempt to answer difficult questions. Does oil hinder democracy? How much does poverty affect political violence? What are the social determinants of crime? With the right data and a linear regression, researchers can plausibly identify patterns with defensible, data-driven equations. This is how much of our knowledge about social systems is currently produced. There is just one glaring problem: our social world isn’t linear. It’s chaotic…
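
A toy illustration of that mismatch (my example, not Klaas’s): fit an ordinary least-squares line to synthetic data containing a tipping point, and the regression returns a tidy slope while the jump, the part that matters, is relegated to the residuals.

```python
# A linear regression fit to data with a tipping point: the fitted line
# reports a smooth positive trend and misses the discontinuity entirely.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 7, 0.2 * x, 5.0) + rng.normal(0, 0.2, x.size)  # jump at x=7

slope, intercept = np.polyfit(x, y, 1)      # ordinary least squares
residuals = y - (slope * x + intercept)
print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
print(f"largest residual: {np.abs(residuals).max():.2f}")  # piles up at the jump
```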

… The deeply flawed assumptions of social modelling do not persist because economists and political scientists are idiots, but rather because the dominant tool for answering social questions has not been meaningfully updated for decades. It is true that some significant improvements have been made since the 1990s. We now have more careful data analysis, better accounting for systematic bias, and more sophisticated methods for inferring causality, as well as new approaches, such as experiments that use randomised controlled trials. However, these approaches can’t solve many of the lingering problems of tackling complexity and chaos. For example, how would you ethically run an experiment to determine which factors definitively provoke civil wars? And how do you know that an experiment in one place and time would produce a similar result a year later in a different part of the world?

These drawbacks have meant that, despite tremendous innovations in technology, linear regressions remain the outdated king of social research. As the US economist J Doyne Farmer puts it in his book Making Sense of Chaos (2024): ‘The core assumptions of mainstream economics don’t match reality, and the methods based on them don’t scale well from small problems to big problems.’ For Farmer, these methods are primarily limited by technology. They have been, he writes, ‘unable to take full advantage of the huge advances in data and technology.’

The drawbacks also mean that social research often has poor predictive power. And, as a result, social science doesn’t even really try to make predictions. In 2022, Mark Verhagen, a research fellow at the University of Oxford, examined a decade of articles in the top academic journals in a variety of disciplines. Only 12 articles out of 2,414 tried to make predictions in the American Economic Review. For the top political science journal, American Political Science Review, the figure was 4 out of 743. And in the American Journal of Sociology, not a single article made a concrete prediction. This has yielded the bizarre dynamic that many social science models can never be definitively falsified, so some deeply flawed theories linger on indefinitely as zombie ideas that refuse to die.

A core purpose of social science research is to prevent avoidable problems and improve human prosperity. Surely that requires more researchers to make predictions about the world at some point – even if chaos theory shows that those claims are likely to be inaccurate.

We produce too many models that are often wrong and rarely useful. But there is a better way. And it will come from synthesising lessons from fields that social scientists have mostly ignored.

Chaos theory emerged in the 1960s and, in the following decades, mathematical physicists such as David Ruelle and Philip Anderson recognised the significance of Lorenz’s insights for our understanding of real-world dynamical systems. As these ideas spread, misfit thinkers from an array of disciplines began to coalesce around a new way of thinking that was at odds with the mainstream conventions in their own fields. They called it ‘complexity’ or ‘complex systems’ research. For these early thinkers, Mecca was the Santa Fe Institute in New Mexico, not far from the sagebrush-dotted hills where the atomic bomb was born. But unlike Mecca, the Santa Fe Institute did not become the hub of a global movement.

Public interest in chaos and complexity surged in the 1980s and ’90s with the publication of James Gleick’s popular science book Chaos (1987), and a prominent reference from Jeff Goldblum’s character in the film Jurassic Park (1993). ‘The shorthand is the butterfly effect,’ he says, when asked to explain chaos theory. ‘A butterfly can flap its wings in Peking and in Central Park you get rain instead of sunshine.’ But aside from a few fringe thinkers who broke free of disciplinary silos, social science responded to the complexity craze mostly with a shrug. This was a profound error, which has contributed to our flawed understanding of some of the most basic questions about society. Taking chaos and complexity seriously requires a fresh approach.

One alternative to linear regressions is agent-based modelling, a kind of virtual experiment in which computers simulate the behaviour of individual people within a society. This tool allows researchers to see how individual actions, with their own motivations, come together to create larger social patterns. Agent-based modelling has been effective at solving problems that involve relatively straightforward decision-making, such as flows of car traffic or the spread of disease during a pandemic. As these models improve, with advances in computational power, they will inevitably continue to yield actionable insights for more complex social domains. Crucially, agent-based models can capture nonlinear dynamics and emergent phenomena, and reveal unexpected bottlenecks or tipping points that would otherwise go unnoticed. They might allow us to better imagine possible worlds, not just measure patterns from the past. They offer a powerful but underused tool in future-oriented social research involving complex systems.
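
For a flavour of the technique, here is a deliberately tiny, illustrative sketch (the contact rates and recovery times are invented for the example): a thousand agents each follow one trivial rule, and a nonlinear, S-shaped epidemic emerges at the population level that no individual rule contains.

```python
# Minimal agent-based epidemic: local interactions -> emergent outbreak.
import random

random.seed(1)
N, P_INFECT, T_RECOVER = 1000, 0.03, 10
state = ['S'] * N           # 'S' susceptible, 'I' infected, 'R' recovered
days_sick = [0] * N
state[0] = 'I'              # patient zero

for day in range(1, 61):
    infected = [i for i in range(N) if state[i] == 'I']
    for i in infected:
        for j in random.sample(range(N), 5):          # 5 random contacts/day
            if state[j] == 'S' and random.random() < P_INFECT:
                state[j] = 'I'
        days_sick[i] += 1
        if days_sick[i] >= T_RECOVER:
            state[i] = 'R'
    if day % 10 == 0:
        print(f"day {day:2d}: infected={state.count('I'):4d}  "
              f"recovered={state.count('R'):4d}")
```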

Additionally, social scientists could incorporate chaotic dynamics by acknowledging the limits of seeking regularities and patterns. Instead, they might try to anticipate and identify systems on the brink, near a consequential tipping point – systems that could be set off by a disgruntled vegetable vendor or triggered by a murdered archduke. The study of ‘self-organised criticality’ in physics and complexity science could help social scientists make sense of this kind of fragility. Proposed by the physicists Per Bak, Chao Tang and Kurt Wiesenfeld, the concept offers a useful analogy for social systems that may disastrously collapse. When a system organises itself toward a critical state, a single fluke could cause the system to change abruptly. By analogy, modern trade networks race toward an optimised but fragile state: a single gust of wind can twist one boat sideways and cause billions of dollars in economic damage, as happened in 2021 when a ship blocked the Suez Canal.

The theory of self-organised criticality was based on the sandpile model, which could be used to evaluate how and why cascades or avalanches occur within systems. If you add grains of sand, one at a time, to a sandpile, eventually a single grain can cause an avalanche; that collapse becomes ever more likely as the sandpile soars toward its limit. A social sandpile model could provide a useful intellectual framework for analysing the resilience of complex social systems. Someone lighting themselves on fire in Norway, God forbid, is unlikely to spark a civil war or regime collapse. That is because the Norwegian sandpile is lower, less stretched to its limit, and therefore less prone to unexpected cascades and tipping points than the towering sandpile that led to the Arab Spring.
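
The Bak–Tang–Wiesenfeld model itself fits in a few lines. In this illustrative sketch (the grid size and number of drops are arbitrary), a cell holding four or more grains topples, shedding one grain to each neighbour; most drops do nothing, while a rare single grain cascades across the pile:

```python
# Bak-Tang-Wiesenfeld sandpile: self-organised criticality in miniature.
import random

random.seed(0)
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Add one grain at a random site; return the avalanche size."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    topples = 0
    stack = [(r, c)]
    while stack:
        r, c = stack.pop()
        while grid[r][c] >= 4:                 # unstable: topple
            grid[r][c] -= 4
            topples += 1
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if 0 <= nr < SIZE and 0 <= nc < SIZE:  # edge grains fall off
                    grid[nr][nc] += 1
                    if grid[nr][nc] >= 4:
                        stack.append((nr, nc))
    return topples

sizes = [drop_grain() for _ in range(20000)]
print("share of drops causing no avalanche:", sizes.count(0) / len(sizes))
print("largest avalanche:", max(sizes), "topples")
```

After enough grains, the pile hovers at its critical state: identical drops produce avalanches whose sizes span several orders of magnitude.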

There are other lessons for social research to be learned from nonlinear evaluations of ecological breakdown. In biology, for instance, the theory of ‘critical slowing down’ predicts that systems near a tipping point – like a struggling coral reef that is being overrun with algae – will take longer to recover from small disturbances. This response seems to act as an early warning system for ecosystems on the brink of collapse.
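
The arithmetic behind critical slowing down is visible in the simplest possible recovery model (a sketch, with invented parameters): if a disturbance decays at rate k, where k stands for the system’s resilience, the time needed to absorb the same small shock grows without bound as k approaches zero, the tipping point.

```python
# Critical slowing down: for dx/dt = -k*x, a shock decays to 5% of its
# initial size after t = ln(20)/k, so recovery time diverges as k -> 0.
import math

for k in (1.0, 0.5, 0.1, 0.02):
    recovery_time = math.log(20) / k
    print(f"resilience k={k:4.2f} -> recovery time ~ {recovery_time:6.1f}")
```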

Social scientists should be drawing on these innovations from complex systems and related fields of research rather than ignoring them. Better efforts to study resilience and fragility in nonlinear systems would drastically improve our ability to avert avoidable catastrophes. And yet, so much social research still chases the outdated dream of distilling the chaotic complexity of our world into a straightforward equation, a simple, ordered representation of a fundamentally disordered world.

When we try to explain our social world, we foolishly ignore the flukes. We imagine that the levers of social change and the gears of history are constrained, not chaotic. We cling to a stripped-down, storybook version of reality, hoping to discover stable patterns. When given the choice between complex uncertainty and comforting – but wrong – certainty, we too often choose comfort.

In truth, we live in an unruly world often governed by chaos. And in that world, the trajectory of our lives, our societies and our histories can forever be diverted by something as small as stepping off a steam train for a beautiful day of sightseeing, or as ephemeral as passing clouds…

Eminently worth reading in full: “The forces of chance,” from @brianklaas in @aeonmag.

* Niccolò Machiavelli, The Prince

###

As we contemplate contingency, we might recall that it was on this date in 1906, at the first International Radiotelegraph Convention in Berlin, that the Morse Code signal “SOS”– “. . . _ _ _ . . .”– became the global standard radio distress signal.  While it was officially replaced in 1999 by the Global Maritime Distress and Safety System, SOS is still recognized as a visual distress signal.

SOS has traditionally been “translated” (expanded) to mean “save our ship,” “save our souls,” “send out succor,” or other such pleas.  But while these may be helpful mnemonics, SOS is not an abbreviation or acronym.  Rather, according to the Oxford English Dictionary, the letters were chosen simply because they are easily transmitted in Morse code.


source

Written by (Roughly) Daily

November 3, 2024 at 1:00 am

“The greatest value of a picture is when it forces us to notice what we never expected to see”*…

Joseph Priestley’s 1765 A Chart of Biography, Library Company of Philadelphia / University of Oregon 

The breathtakingly broadly talented Joseph Priestley left us much– not least, Alyson Foster explains, a then-new way of understanding history…

It’s a testament to the wide-ranging and unconventional nature of Joseph Priestley’s mind that no one has settled on a term to sum up exactly what he was. The eighteenth-century British polymath has been described as, among other things, a historian, a chemist, an educator, a philosopher, a theologian, and a political radical who became, for a period of time, the most despised person in England. Priestley’s many contradictions—as a rationalist Unitarian millenarian, as a mild-mannered controversialist, as a thinker who was both ahead of his time and behind it—have provided endless fodder for the historians who have debated the precise nature of his legacy and his place among his fellow Enlightenment intellectuals. But his contributions—however they are categorized—have continued to live on in subtle and surprisingly enduring ways, more than two hundred years after his death, at the age of seventy, in rural Pennsylvania.

Take, for example, A Chart of Biography, which is considered to be the first modern timeline. This unusual, and unusually beautiful, pedagogical tool, which was published by Priestley in 1765, while he was in his thirties and working as a tutor at an academy in Warrington, England, tends to get lost in the shuffle of Priestley’s more notable achievements—his seminal 1761 textbook on language, The Rudiments of English Grammar, say, or his discovery of nine gases, including oxygen, 13 years later. But the chart, along with its companion, A New Chart of History, which Priestley published four years later, has become a curious subject of interest among data visualization aficionados who have analyzed its revolutionary design in academic papers and added it to Internet lists of notable infographics. Recently, both charts have become the focus of an NEH-supported digital humanities project, Chronographics: The Time Charts of Joseph Priestley, produced by scholars at the University of Oregon. 

Even those of us ignorant of (or uninterested in) infographics can look at the painstakingly detailed Chart of Biography for a moment or two and appreciate how it has become a source of fascination. The two-foot-by-three-foot, pastel-striped paper scroll—which contains the meticulously inscribed names of approximately 2,000 poets, artists, statesmen, and other famous historical figures dating back three millennia—is visually striking, combining a formal, somewhat ornate eighteenth-century aesthetic with the precise organization of a schematic. Every single one of the chart’s subjects is grouped vertically into one of six occupational categories, then plotted out chronologically along a horizontal line divided into ten-year increments. Despite the huge quantity of information it contains, it is extremely user-friendly. Any one of Priestley’s history students could run his eye across the chart and immediately gain a sense of the temporal lay of the land. Who came first: Copernicus or Newton? How many centuries separate Genghis Khan from Joan of Arc? Which artists were working during the reign of Henry VIII? The chart was a masterful blend of form and function…

The most significant design feature of Priestley’s chart—as historians point out—was the way in which he linked units of time to units of distance on the page, similar to the way a cartographer uses scale when creating a map. (The artist Pietro Lorenzetti lived two hundred years before Titian and thus is situated twice as far from Titian as Jan van Eyck, who predated Titian by about a century.) If this innovation is hard for contemporary viewers to fully appreciate, it’s probably because Priestley’s representation of time has become a convention that’s used everywhere in visual design and seems so obvious it’s now taken for granted.
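
For readers who think in code rather than copperplate, the design move reduces to two plotting decisions: let the horizontal axis be years, and let each life be a bar positioned and sized by those years. A minimal sketch (the handful of names and dates are merely illustrative):

```python
# Priestley's principle in miniature: units of time mapped to units of
# distance, so lifespans can be compared at a glance.
import matplotlib.pyplot as plt

people = [("Copernicus", 1473, 1543), ("Titian", 1488, 1576),
          ("Newton", 1643, 1727), ("Priestley", 1733, 1804)]

fig, ax = plt.subplots(figsize=(8, 2))
for row, (name, born, died) in enumerate(people):
    ax.barh(row, died - born, left=born)          # bar length = years lived
    ax.text(born - 5, row, name, ha="right", va="center")
ax.set_yticks([])
ax.set_xlabel("year")
plt.tight_layout()
plt.show()
```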

To Priestley’s contemporaries, though, who were accustomed to cumbersome Eusebian-style [see here] chronological tables or the visually striking but often obscure “stream charts” created by the era’s chronographers, Priestley’s method of capturing time on the page was revelatory—a way of seeing historical patterns and connections that would otherwise have remained hidden. “To many readers,” wrote Daniel Rosenberg and Anthony Grafton in their book, Cartographies of Time, Priestley’s Chart of Biography offered a never-before-seen “picture of time itself.”

It was no wonder, then, that eighteenth-century readers found themselves drawn to it. A Chart of Biography sold well in both England and the United States, accruing many fans along the way. Along with the New Chart of History, it would go on to be printed in at least 19 editions and spawn numerous imitations, including one by Priestley’s future friend Thomas Jefferson, who developed his own “time chart” of market seasons in Washington, and the historian David Ramsay, who acknowledged Priestley’s influence in his Historical and Biographical Chart of the United States. The time charts marked Priestley’s first major commercial success and played a key role in establishing his reputation as a serious intellectual, earning him an honorary degree from the University of Edinburgh, and helping him secure a fellowship nomination to the Royal Society of London.

As much as anything he published, and he published a staggering amount—somewhere between 150 and 200 books, articles, papers, and pamphlets—Priestley’s time charts encapsulate his uniqueness as a thinker. Of his many intellectual gifts, his gift for synthesis—for knitting together the seemingly disparate things that caught his attention—might have been his greatest… 

Read on for how Priestley went on to become the most controversial man in England: “Joseph Priestley Created Revolutionary ‘Maps’ of Time,” by @alysonafoster in @humanitiesmag from @NEHgov.

More info on the Chart– and magnified views– here.

* John Tukey

###

As we celebrate constructive charts, we might spare a thought for Edward Lorenz, a mathematician and meteorologist, best remembered as a pioneer of Chaos Theory; he died on this date in 2008. Having noticed that his computer weather simulation gave wildly different results from even tiny changes in the input data, he began investigating a phenomenon that he famously outlined in a 1963 paper— and that came to be known as the “butterfly effect,” that the flap of a butterfly’s wings could ultimately determine the weather thousands of miles away and days later… generalized in Chaos Theory to state that “slightly differing initial states can evolve into considerably different [later] states.”

source

Written by (Roughly) Daily

April 16, 2024 at 1:00 am

“Everybody wants to build and nobody wants to do maintenance”*…

 


 

The most unappreciated and undervalued forms of technological labour are also the most ordinary: those who repair and maintain technologies that already exist, that were ‘innovated’ long ago. This shift in emphasis involves focusing on the constant processes of entropy and un-doing – which the media scholar Steven Jackson calls ‘broken world thinking’ – and the work we do to slow or halt them, rather than on the introduction of novel things…

We can think of labour that goes into maintenance and repair as the work of the maintainers, those individuals whose work keeps ordinary existence going rather than introducing novel things. Brief reflection demonstrates that the vast majority of human labour, from laundry and trash removal to janitorial work and food preparation, is of this type: upkeep. This realisation has significant implications for gender relations in and around technology. Feminist theorists have long argued that obsessions with technological novelty obscure all of the labour, including housework, that women, disproportionately, do to keep life on track. Domestic labour has huge financial ramifications but largely falls outside economic accounting, like Gross Domestic Product. In her classic 1983 book, More Work for Mother, Ruth Schwartz Cowan examined home technologies – such as washing machines and vacuum cleaners – and how they fit into women’s ceaseless labour of domestic upkeep. One of her more famous findings was that new housekeeping technologies, which promised to save labour, literally created more work for mother as cleanliness standards rose, leaving women perpetually unable to keep up.

Nixon, wrong about so many things, also was wrong to point to household appliances as self-evident indicators of American progress. Ironically, Cowan’s work first met with scepticism among male scholars working in the history of technology, whose focus was a male pantheon of inventors: Bell, Morse, Edison, Tesla, Diesel, Shockley, and so on. A renewed focus on maintenance and repair also has implications beyond the gender politics that More Work for Mother brought to light. When they set innovation-obsession to the side, scholars can confront various kinds of low-wage labour performed by many African-Americans, Latinos, and other racial and ethnic minorities. From this perspective, recent struggles over increasing the minimum wage, including for fast food workers, can be seen as arguments for the dignity of being a maintainer…

Entire societies have come to talk about innovation as if it were an inherently desirable value, like love, fraternity, courage, beauty, dignity, or responsibility. Innovation-speak worships at the altar of change, but it rarely asks who benefits, to what end? A focus on maintenance provides opportunities to ask questions about what we really want out of technologies. What do we really care about? What kind of society do we want to live in? Will this help get us there? We must shift from means, including the technologies that underpin our everyday actions, to ends, including the many kinds of social beneficence and improvement that technology can offer. Our increasingly unequal and fearful world would be grateful…

Capitalism excels at innovation but is failing at maintenance, and for most lives it is maintenance that matters more: “Hail the maintainers.”

[image above: source]

* Kurt Vonnegut

###

As we invest in infrastructure, we might send carefully-calculated birthday greetings to Jules Henri Poincaré; he was born on this date in 1854.  A mathematician, theoretical physicist, engineer, and a philosopher of science, Poincaré is considered the “last Universalist” in math– the last mathematician to excel in all fields of the discipline as it existed during his lifetime.

Poincaré was a co-discoverer (with Einstein and Lorentz) of the special theory of relativity; he laid the foundations for the fields of topology and chaos theory; and he had a huge impact on cosmogony.  His famous “Conjecture” held that if any loop in a given three-dimensional space can be shrunk to a point, the space is equivalent to a sphere; it remained unsolved until Grigori Perelman completed a proof in 2003.

source

And we might also send amusingly-phrased birthday greetings to Ludwig Josef Johann Wittgenstein; the philosopher of logic, math, language, and the mind was born on April 26, 1889.

source

 

 

 

Written by (Roughly) Daily

April 29, 2020 at 1:01 am

“It’s tough to make predictions, especially about the future”*…

 


 

As astrophysicist Mario Livio recounts in Brilliant Blunders, in April 1900, the eminent physicist Lord Kelvin proclaimed that our understanding of the cosmos was complete except for two “clouds”—minor details still to be worked out. Those clouds had to do with radiation emissions and with the speed of light… and they pointed the way to two major revolutions in physics: quantum mechanics and the theory of relativity.  Prediction is hard; ironically, it’s especially hard for experts attempting foresight in their own fields…

The idea for the most important study ever conducted of expert predictions was sparked in 1984, at a meeting of a National Research Council committee on American-Soviet relations. The psychologist and political scientist Philip E. Tetlock was 30 years old, by far the most junior committee member. He listened intently as other members discussed Soviet intentions and American policies. Renowned experts delivered authoritative predictions, and Tetlock was struck by how many perfectly contradicted one another and were impervious to counterarguments.

Tetlock decided to put expert political and economic predictions to the test. With the Cold War in full swing, he collected forecasts from 284 highly educated experts who averaged more than 12 years of experience in their specialties. To ensure that the predictions were concrete, experts had to give specific probabilities of future events. Tetlock had to collect enough predictions that he could separate lucky and unlucky streaks from true skill. The project lasted 20 years, and comprised 82,361 probability estimates about the future.

The result: The experts were, by and large, horrific forecasters. Their areas of specialty, years of experience, and (for some) access to classified information made no difference. They were bad at short-term forecasting and bad at long-term forecasting. They were bad at forecasting in every domain. When experts declared that future events were impossible or nearly impossible, 15 percent of them occurred nonetheless. When they declared events to be a sure thing, more than one-quarter of them failed to transpire. As the Danish proverb warns, “It is difficult to make predictions, especially about the future.”…

One subgroup of scholars, however, did manage to see more of what was coming… they were not vested in a single discipline. They took from each argument and integrated apparently contradictory worldviews…

The integrators outperformed their colleagues in pretty much every way, but especially trounced them on long-term predictions. Eventually, Tetlock bestowed nicknames (borrowed from the philosopher Isaiah Berlin) on the experts he’d observed: The highly specialized hedgehogs knew “one big thing,” while the integrator foxes knew “many little things.”…

Credentialed authorities are comically bad at predicting the future. But reliable– at least more reliable– forecasting is possible: “The Peculiar Blindness of Experts.”

See Tetlock discuss his findings at a Long Now Seminar.  Read Berlin’s riff on Archilochus, “The Hedgehog and the Fox,” here.

* Yogi Berra

###

As we ponder prediction, we might send complicating birthday greetings to Edward Norton Lorenz; he was born on this date in 1917.  A mathematician who turned to meteorology during World War II, he established the theoretical basis of weather and climate predictability, as well as the basis for computer-aided atmospheric physics and meteorology.

But he is probably better remembered as the founder of modern chaos theory, a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions… and thus practically impossible to predict in detail with certainty.

In 1961, Lorenz was using a simple digital computer, a Royal McBee LGP-30, to simulate weather patterns by modeling 12 variables, representing things like temperature and wind speed. He wanted to see a sequence of data again, and to save time he started the simulation in the middle of its course. He did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To his surprise, the weather that the machine began to predict was completely different from the previous calculation. The culprit: a rounded decimal number on the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. His work on the topic culminated in the publication of his 1963 paper “Deterministic Nonperiodic Flow” in Journal of the Atmospheric Sciences, and with it, the foundation of chaos theory…
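
Lorenz’s accident is easy to re-create today. Here is a minimal sketch (not his 12-variable weather program, but the three-variable system from the 1963 paper, integrated with a deliberately crude Euler step): one run starts from 0.506127, the other from the printout’s rounded 0.506, and after tracking each other for a while the two forecasts part ways completely.

```python
# Two Lorenz-63 runs whose initial x-values differ only past the 3rd decimal.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One (crude) Euler step of Lorenz's 1963 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (0.506127, 1.0, 1.0)   # full-precision start
b = (0.506,    1.0, 1.0)   # the rounded printout value

for step in range(3001):
    if step % 1000 == 0:
        print(f"t={step * 0.01:5.1f}: x_a={a[0]:8.3f}  x_b={b[0]:8.3f}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```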

His description of the butterfly effect, the idea that small changes can have large consequences, followed in 1969.

source

 

Totally random, man!…

source

Edward Lorenz, a pioneer of Chaos Theory, famously observed in a 1963 paper that the flap of a butterfly’s wings could ultimately determine the weather thousands of miles away and days later.

Now, thanks to the ever-extraordinary Exploratorium, readers can simulate their own butterflies, and watch them interact with “strange attractors.”

Try it here.

As we sidle up to the stochastic, we might recall that it was on this date in 1873 that Samuel Clemens (AKA Mark Twain) received a U.S. patent (No. 140,245) for a self-pasting scrapbook– which was popular enough ultimately to sell 25,000 copies.  Two years earlier the innovative author had received his first patent– for “An Improvement in Adjustable and Detachable Garment Straps” (No. 121,992– used for shirts, underpants, and women’s corsets).  Later (in 1885) he patented a history trivia game.

The Self-Pasting Scrapbook (source)