(Roughly) Daily

Posts Tagged ‘foresight’

“The older one gets the more convinced one becomes that his Majesty King Chance does three-quarters of the business of this miserable universe”*…

Bockscar en route to Nagasaki, 9 August 1945. US Air Force photo

In an essay adapted from his book Fluke: Chance, Chaos, and Why Everything We Do Matters, Brian Klaas argues that social scientists are clinging to simple models of reality – with disastrous results. Instead, he suggests, they must embrace chaos theory…

The social world doesn’t work how we pretend it does. Too often, we are led to believe it is a structured, ordered system defined by clear rules and patterns. The economy, apparently, runs on supply-and-demand curves. Politics is a science. Even human beliefs can be charted, plotted, graphed. And using the right regression we can tame even the most baffling elements of the human condition. Within this dominant, hubristic paradigm of social science, our world is treated as one that can be understood, controlled and bent to our whims. It can’t.

Our history has been an endless but futile struggle to impose order, certainty and rationality onto a Universe defined by disorder, chance and chaos. And, in the 21st century, this tendency seems to be only increasing as calamities in the social world become more unpredictable. From 9/11 to the financial crisis, the Arab Spring to the rise of populism, and from a global pandemic to devastating wars, our modern world feels more prone to disastrous ‘shocks’ than ever before. Though we’ve got mountains of data and sophisticated models, we haven’t gotten much better at figuring out what looms around the corner. Social science has utterly failed to anticipate these bolts from the blue. In fact, most rigorous attempts to understand the social world simply ignore its chaotic quality – writing it off as ‘noise’ – so we can cram our complex reality into neater, tidier models. But when you peer closer at the underlying nature of causality, it becomes impossible to ignore the role of flukes and chance events. Shouldn’t our social models take chaos more seriously?

The problem is that social scientists don’t seem to know how to incorporate the nonlinearity of chaos. For how can disciplines such as psychology, sociology, economics and political science anticipate the world-changing effects of something as small as one consequential day of sightseeing or as ephemeral as passing clouds?

On 30 October 1926, Henry and Mabel Stimson stepped off a steam train in Kyoto, Japan, and set in motion an unbroken chain of events that, two decades later, led to the deaths of 140,000 people in a city more than 300 km away.

The American couple began their short holiday in Japan’s former imperial capital by walking from the railway yard to their room at the nearby Miyako Hotel. It was autumn. The maples had turned crimson, and the ginkgo trees had burst into a golden shade of yellow. Henry chronicled a ‘beautiful day devoted to sightseeing’ in his diary.

Nineteen years later, he had become the United States Secretary of War, the chief civilian overseeing military operations in the Second World War, and would soon join a clandestine committee of soldiers and scientists tasked with deciding how to use the first atomic bomb. One Japanese city ticked several boxes: the former imperial capital. The Target Committee agreed that Kyoto must be destroyed. They drew up a tactical bombing map and decided to aim for the city’s railway yard, just around the corner from the Miyako Hotel where the Stimsons had stayed in 1926.

Stimson pleaded with President Harry Truman not to bomb Kyoto. He sent cables in protest. The generals began referring to Kyoto as Stimson’s ‘pet city’. Eventually, Truman acquiesced, removing Kyoto from the list of targets. On 6 August 1945, Hiroshima was bombed instead.

The next atomic bomb was intended for Kokura, a city at the tip of Japan’s southern island of Kyushu. On the morning of 9 August, three days after Hiroshima was destroyed, six US B-29 bombers were launched, including the strike plane Bockscar. Around 10:45am, Bockscar prepared to release its payload. But, according to the flight log, the target ‘was obscured by heavy ground haze and smoke’. The crew decided not to risk accidentally dropping the atomic bomb in the wrong place.

Bockscar then headed for the secondary target, Nagasaki. But it, too, was obscured. Running low on fuel, the plane prepared to return to base, but a momentary break in the clouds gave the bombardier a clear view of the city. Unbeknown to anyone below, Nagasaki was bombed due to passing clouds over Kokura. To this day, the Japanese refer to ‘Kokura’s luck’ when one unknowingly escapes disaster.

Roughly 200,000 people died in the attacks on Hiroshima and Nagasaki – and not Kyoto and Kokura – largely due to one couple’s vacation two decades earlier and some passing clouds. But if such random events could lead to so many deaths and change the direction of a globally destructive war, how are we to understand or predict the fates of human society? Where, in the models of social change, are we supposed to chart the variables for travel itineraries and clouds?

In the 1970s, the British statistician George Box quipped that ‘all models are wrong, but some are useful’. But today, many of the models we use to describe our social world are neither right nor useful. There is a better way. And it doesn’t entail a futile search for regular patterns in the maddening complexity of life. Instead, it involves learning to navigate the chaos of our social worlds…

[Klaas reviews the history of our attempts to conquer uncertainty, concluding with Edward Norton “Butterfly Effect” Lorenz and what he discovered when he tried to predict the weather…]

… Any error, even a trillionth of a percentage point off on any part of the system, would eventually make any predictions about the future futile. Lorenz had discovered chaos theory.

The core principle of the theory is this: chaotic systems are highly sensitive to initial conditions. That means these systems are fully deterministic but also utterly unpredictable. As Poincaré had anticipated in 1908, small changes in conditions can produce enormous errors. By demonstrating this sensitivity, Lorenz proved Poincaré right.

Chaos theory, to this day, explains why our weather forecasts remain useless beyond a week or two. To predict meteorological changes accurately, we, like Laplace’s demon, would have to be perfect in our understanding of weather systems, and – no matter how advanced our supercomputers may seem – we never will be. Confidence in a predictable future, therefore, is the province of charlatans and fools; or, as the US Buddhist nun Pema Chödrön put it: ‘If you’re invested in security and certainty, you are on the wrong planet.’
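
[To make that sensitivity concrete, here is a minimal, illustrative Python sketch (mine, not the essay's) of Lorenz's own equations: two runs whose starting points differ by one part in a billion soon bear no resemblance to one another. The parameter values are Lorenz's classic choices; the step size and starting point are arbitrary.]

```python
# A minimal sketch of sensitive dependence on initial conditions: integrate the
# Lorenz system for two trajectories whose starting points differ by 1e-9 and
# watch the separation grow to the size of the attractor itself.
import math

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one small Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories: identical except for a billionth nudged onto the first x.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)

for step in range(1, 40001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 10000 == 0:
        gap = math.dist(a, b)
        print(f"t = {step * 0.001:5.1f}  separation = {gap:.6f}")
```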

The second wrinkle in our conception of an ordered, certain world came from the discoveries of quantum mechanics that began in the early 20th century. Seemingly irreducible randomness was discovered in bewildering quantum equations, shifting the dominant scientific conception of our world from determinism to indeterminism (though some interpretations of quantum physics arguably remain compatible with a deterministic universe, such as the ‘many-worlds’ interpretation, Bohmian mechanics, also known as the ‘pilot-wave’ model, and the less prominent theory of superdeterminism). Scientific breakthroughs in quantum physics showed that the unruly nature of the Universe could not be fully explained by either gods or Newtonian physics. The world may be defined, at least in part, by equations that yield inexplicable randomness. And it is not just a partly random world, either. It is startlingly arbitrary…

… How can we make sense of social change when consequential shifts often arise from chaos? This is the untameable bane of social science, a field that tries to detect patterns and assert control over the most unruly, chaotic system that exists in the known Universe: 8 billion interacting human brains embedded in a constantly changing world. While we search for order and patterns, we spend less time focused on an obvious but consequential truth. Flukes matter.

Though some scholars in the 19th century, such as the English philosopher John Stuart Mill and his intellectual descendants, believed there were laws governing human behaviour, social science was swiftly disabused of the notion that a straightforward social physics was possible. Instead, most social scientists have aimed toward what the US sociologist Robert K Merton called ‘middle-range theory’, in which researchers hope to identify regularities and patterns in certain smaller realms that can perhaps later be stitched together to derive the broader theoretical underpinnings of human society. Though some social scientists are sceptical that such broader theoretical underpinnings exist, the most common approach to social science is to use empirical data from the past to tease out ordered patterns that point to stable relationships between causes and effects. Which variables best correlate with the onset of civil wars? Which economic indicators offer the most accurate early warning signs of recessions? What causes democracy?

In the mid-20th century, researchers no longer sought the social equivalent of a physical law (like gravity), but they still looked for ways of deriving clear-cut patterns within the social world. What limited this ability was technology. Just as Lorenz was constrained by the available technology when forecasting weather in the Pacific theatre of the Second World War, so too were social scientists constrained by a lack of computing power. This changed in the 1980s and ’90s, when cheap and sophisticated computers became new tools for understanding social worlds. Suddenly, social scientists – sociologists, economists, psychologists or political scientists – could take a large number of variables and plug them into statistical software packages such as SPSS and Stata, or programming languages such as R. Complex equations would then process these data points, finding the ‘line of best fit’ using a ‘linear regression’, to help explain how groups of humans change over time. A quantitative revolution was born.

By the 2000s, area studies specialists who had previously done their research by trekking across the globe and embedding themselves in specific cultures were largely supplanted by office-bound data junkies who could manipulate numbers and offer evidence of hidden relationships that were obscured prior to the rise of sophisticated numerical analysis. In the process, social science became dominated by one computational tool above all others: linear regressions. To help explain social change, this tool uses past data to try to understand the relationships between variables. A regression produces a simplified equation that tries to fit the cluster of real-world datapoints, while ‘controlling’ for potential confounders, in the hopes of identifying which variables drive change. Using this tool, researchers can feed a model with a seemingly endless string of data as they attempt to answer difficult questions. Does oil hinder democracy? How much does poverty affect political violence? What are the social determinants of crime? With the right data and a linear regression, researchers can plausibly identify patterns with defensible, data-driven equations. This is how much of our knowledge about social systems is currently produced. There is just one glaring problem: our social world isn’t linear. It’s chaotic…
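
[For readers who haven't run one, here is a minimal Python sketch of the 'line of best fit' machinery just described: an ordinary least squares fit with one explanatory variable and one 'control'. The variable names and the data are invented for illustration; the point is only to show what the tool assumes.]

```python
# A minimal sketch of the kind of linear model described above: ordinary least
# squares with one explanatory variable and one "control", fit to made-up data.
# The variable names (oil_income, poverty, conflict_risk) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Fabricated data for illustration: a linear world with additive noise, which
# is precisely the assumption the essay argues the real social world violates.
oil_income = rng.normal(size=n)
poverty = rng.normal(size=n)
conflict_risk = 0.4 * oil_income + 0.7 * poverty + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, oil income, and poverty as a control.
X = np.column_stack([np.ones(n), oil_income, poverty])
coef, *_ = np.linalg.lstsq(X, conflict_risk, rcond=None)

print("intercept, oil, poverty:", np.round(coef, 3))
# On linear data the recovered coefficients sit near 0.4 and 0.7; feed such a
# model a chaotic, nonlinear process and the tidy coefficients mean far less.
```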

… The deeply flawed assumptions of social modelling do not persist because economists and political scientists are idiots, but rather because the dominant tool for answering social questions has not been meaningfully updated for decades. It is true that some significant improvements have been made since the 1990s. We now have more careful data analysis, better accounting for systematic bias, and more sophisticated methods for inferring causality, as well as new approaches, such as experiments that use randomised controlled trials. However, these approaches can’t solve many of the lingering problems of tackling complexity and chaos. For example, how would you ethically run an experiment to determine which factors definitively provoke civil wars? And how do you know that an experiment in one place and time would produce a similar result a year later in a different part of the world?

These drawbacks have meant that, despite tremendous innovations in technology, linear regressions remain the outdated king of social research. As the US economist J Doyne Farmer puts it in his book Making Sense of Chaos (2024): ‘The core assumptions of mainstream economics don’t match reality, and the methods based on them don’t scale well from small problems to big problems.’ For Farmer, these methods are primarily limited by technology. They have been, he writes, ‘unable to take full advantage of the huge advances in data and technology.’

The drawbacks also mean that social research often has poor predictive power. And, as a result, social science doesn’t even really try to make predictions. In 2022, Mark Verhagen, a research fellow at the University of Oxford, examined a decade of articles in the top academic journals in a variety of disciplines. Only 12 articles out of 2,414 tried to make predictions in the American Economic Review. For the top political science journal, American Political Science Review, the figure was 4 out of 743. And in the American Journal of Sociology, not a single article made a concrete prediction. This has yielded the bizarre dynamic that many social science models can never be definitively falsified, so some deeply flawed theories linger on indefinitely as zombie ideas that refuse to die.

A core purpose of social science research is to prevent avoidable problems and improve human prosperity. Surely that requires more researchers to make predictions about the world at some point – even if chaos theory shows that those claims are likely to be inaccurate.

We produce too many models that are often wrong and rarely useful. But there is a better way. And it will come from synthesising lessons from fields that social scientists have mostly ignored.

Chaos theory emerged in the 1960s and, in the following decades, mathematical physicists such as David Ruelle and Philip Anderson recognised the significance of Lorenz’s insights for our understanding of real-world dynamical systems. As these ideas spread, misfit thinkers from an array of disciplines began to coalesce around a new way of thinking that was at odds with the mainstream conventions in their own fields. They called it ‘complexity’ or ‘complex systems’ research. For these early thinkers, Mecca was the Santa Fe Institute in New Mexico, not far from the sagebrush-dotted hills where the atomic bomb was born. But unlike Mecca, the Santa Fe Institute did not become the hub of a global movement.

Public interest in chaos and complexity surged in the 1980s and ’90s with the publication of James Gleick’s popular science book Chaos (1987), and a prominent reference from Jeff Goldblum’s character in the film Jurassic Park (1993). ‘The shorthand is the butterfly effect,’ he says, when asked to explain chaos theory. ‘A butterfly can flap its wings in Peking and in Central Park you get rain instead of sunshine.’ But aside from a few fringe thinkers who broke free of disciplinary silos, social science responded to the complexity craze mostly with a shrug. This was a profound error, which has contributed to our flawed understanding of some of the most basic questions about society. Taking chaos and complexity seriously requires a fresh approach.

One alternative to linear regressions is agent-based modelling, a kind of virtual experiment in which computers simulate the behaviour of individual people within a society. This tool allows researchers to see how individual actions, with their own motivations, come together to create larger social patterns. Agent-based modelling has been effective at solving problems that involve relatively straightforward decision-making, such as flows of car traffic or the spread of disease during a pandemic. As these models improve, with advances in computational power, they will inevitably continue to yield actionable insights for more complex social domains. Crucially, agent-based models can capture nonlinear dynamics and emergent phenomena, and reveal unexpected bottlenecks or tipping points that would otherwise go unnoticed. They might allow us to better imagine possible worlds, not just measure patterns from the past. They offer a powerful but underused tool in future-oriented social research involving complex systems.
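
[A minimal, hypothetical sketch of the idea in Python: agents on a ring adopt a behaviour once enough of their neighbours have. The cascade that results is an emergent outcome of many individual thresholds, not a coefficient estimated from past data; the rule and parameters here are illustrative only.]

```python
# A toy agent-based model: each agent adopts a behaviour once the share of its
# neighbours who have adopted reaches that agent's personal threshold.
import random

random.seed(1)
N = 200                      # number of agents on a ring
K = 4                        # neighbours consulted on each side
thresholds = [random.uniform(0.1, 0.5) for _ in range(N)]

def run(seed_agents):
    """Spread the behaviour from a small seed set until no one else adopts."""
    adopted = [False] * N
    for i in seed_agents:
        adopted[i] = True
    changed = True
    while changed:
        changed = False
        for i in range(N):
            if adopted[i]:
                continue
            neighbours = [(i + d) % N for d in range(-K, K + 1) if d != 0]
            share = sum(adopted[j] for j in neighbours) / len(neighbours)
            if share >= thresholds[i]:
                adopted[i] = True
                changed = True
    return sum(adopted)

# The same model, seeded in two barely different places, can stall after a
# handful of adopters or sweep the whole ring: the outcome depends on the
# local mix of thresholds, not on any single tidy coefficient.
print("seed at 0-2:  ", run([0, 1, 2]), "adopters")
print("seed at 50-52:", run([50, 51, 52]), "adopters")
```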

Additionally, social scientists could incorporate chaotic dynamics by acknowledging the limits of seeking regularities and patterns. Instead, they might try to anticipate and identify systems on the brink, near a consequential tipping point – systems that could be set off by a disgruntled vegetable vendor or triggered by a murdered archduke. The study of ‘self-organised criticality’ in physics and complexity science could help social scientists make sense of this kind of fragility. Proposed by the physicists Per Bak, Chao Tang and Kurt Wiesenfeld, the concept offers a useful analogy for social systems that may disastrously collapse. When a system organises itself toward a critical state, a single fluke could cause the system to change abruptly. By analogy, modern trade networks race toward an optimised but fragile state: a single gust of wind can twist one boat sideways and cause billions of dollars in economic damage, as happened in 2021 when a ship blocked the Suez Canal.

The theory of self-organised criticality was based on the sandpile model, which could be used to evaluate how and why cascades or avalanches occur within systems. If you add grains of sand, one at a time, to a sandpile, eventually, a single grain of sand can cause an avalanche. But that collapse becomes more likely as the sandpile soars to its limit. A social sandpile model could provide a useful intellectual framework for analysing the resilience of complex social systems. Someone lighting themselves on fire, God forbid, in Norway is unlikely to spark a civil war or regime collapse. That is because the Norwegian sandpile is lower, less stretched to its limit, and therefore less prone to unexpected cascades and tipping points than the towering sandpile that led to the Arab Spring.
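
[Here is a minimal Python sketch of that sandpile, after Bak, Tang and Wiesenfeld: grains are dropped one at a time on a small grid, any cell holding four grains topples one onto each neighbour, and the identical small addition sometimes does nothing and sometimes sets off a long chain of topples. Grid size and number of drops are arbitrary choices for the example.]

```python
# A minimal Bak-Tang-Wiesenfeld sandpile: drop grains one at a time and record
# how large an avalanche (chain of topples) each single grain triggers.
import random

random.seed(2)
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Drop one grain at a random cell and return the resulting avalanche size."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    topples = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topples += 1
        if grid[i][j] >= 4:          # a heavily loaded cell may topple again
            unstable.append((i, j))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains at the edge fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topples

avalanches = [drop_grain() for _ in range(20000)]
zero = sum(1 for a in avalanches if a == 0)
print(f"largest avalanche: {max(avalanches)} topples; "
      f"{zero} of {len(avalanches)} drops caused no topple at all")
```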

There are other lessons for social research to be learned from nonlinear evaluations of ecological breakdown. In biology, for instance, the theory of ‘critical slowing down’ predicts that systems near a tipping point – like a struggling coral reef that is being overrun with algae – will take longer to recover from small disturbances. This response seems to act as an early warning system for ecosystems on the brink of collapse.
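
[A toy illustration of that early-warning signal, not drawn from the essay: in a one-variable system whose resting state vanishes when a parameter reaches zero, recovery from the same small kick takes longer and longer as that tipping point approaches.]

```python
# A minimal sketch of "critical slowing down": the system dx/dt = a - x**2 has
# a stable resting state at sqrt(a) that disappears when a reaches zero. The
# time needed to recover from the same small perturbation stretches out as the
# parameter a approaches that tipping point.

def recovery_time(a, kick=0.05, dt=0.001, tol=0.005):
    """Time to return to within `tol` of equilibrium after a small perturbation."""
    eq = a ** 0.5
    x = eq - kick                  # nudge the system below its resting state
    t = 0.0
    while abs(x - eq) > tol:
        x += (a - x * x) * dt      # simple Euler step of dx/dt = a - x^2
        t += dt
    return t

for a in (1.0, 0.25, 0.05, 0.01):
    print(f"a = {a:<5} recovery time ~ {recovery_time(a):.2f}")
```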

Social scientists should be drawing on these innovations from complex systems and related fields of research rather than ignoring them. Better efforts to study resilience and fragility in nonlinear systems would drastically improve our ability to avert avoidable catastrophes. And yet, so much social research still chases the outdated dream of distilling the chaotic complexity of our world into a straightforward equation, a simple, ordered representation of a fundamentally disordered world.

When we try to explain our social world, we foolishly ignore the flukes. We imagine that the levers of social change and the gears of history are constrained, not chaotic. We cling to a stripped-down, storybook version of reality, hoping to discover stable patterns. When given the choice between complex uncertainty and comforting – but wrong – certainty, we too often choose comfort.

In truth, we live in an unruly world often governed by chaos. And in that world, the trajectory of our lives, our societies and our histories can forever be diverted by something as small as stepping off a steam train for a beautiful day of sightseeing, or as ephemeral as passing clouds…

Eminently worth reading in full: “The forces of chance,” from @brianklaas in @aeonmag.

* Niccolò Machiavelli, The Prince

###

As we contemplate contingency, we might recall that it was on this date in 1906, at the first International Radiotelegraph Convention in Berlin, that the Morse Code signal “SOS”– “. . . _ _ _ . . .”– became the global standard radio distress signal.  While it was officially replaced in 1999 by the Global Maritime Distress and Safety System, SOS is still recognized as a visual distress signal.

SOS has traditionally been “translated” (expanded) to mean “save our ship,” “save our souls,” “send out succor,” or other such pleas.  But while these may be helpful mnemonics, SOS is not an abbreviation or acronym.  Rather, according to the Oxford English Dictionary, the letters were chosen simply because they are easily transmitted in Morse code.


source

Written by (Roughly) Daily

November 3, 2024 at 1:00 am

“We are saved by making the future present to ourselves”*…

Recently, Steven Johnson received the Pioneer Award in Positive Psychology from UPenn’s Positive Psychology Center. Presented by his friend and mentor Marty Seligman, it honored Johnson’s “work over the years advancing the cause of human flourishing.”

From his acceptance speech…

… I’ve always been drawn to… long-term perspectives, where you position yourself… in the larger context of hundreds or thousands of years of human suffering and progress. Some of my California friends even built an entire organization to celebrate that long-term view: the Long Now Foundation, which is dedicated to thinking on the scale of centuries or millennia, encouraging us to get out of the 24-hour news cycle that dominates so much of our lives today. A technologically advanced culture cannot flourish without getting better at anticipating the future. That’s why science fiction matters. That’s why scenario planning matters. That’s why complex software simulations that enable us to forecast things like climate change on the scale of decades matter. 

And here I want to bring us back to another idea that Marty Seligman has been an advocate for. Almost ten years ago, he edited a collection of essays called Homo Prospectus which had a huge influence on my thinking about the world. The core idea behind that book was that a defining superpower of human beings is our ability to mentally time-travel to possible future states, and think about how we might organize our activities to arrive at those imagined future outcomes. 

“What best distinguishes our species,” he wrote in the introduction to that book, “is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain.” 

It is unclear whether nonhuman animals have any real concept of the future at all. Some organisms display behavior that has long-term consequences, like a squirrel’s burying a nut for winter, but those behaviors are all instinctive. The latest studies of animal cognition suggest that some primates and birds may carry out deliberate preparations for events that will occur in the near future. But making decisions based on future prospects on the scale of months or years — even something as simple as planning a gathering of the tribe a week from now — would be unimaginable even to our closest primate relatives. If the Homo prospectus theory is correct, those limited time-traveling skills explain an important piece of the technological gap that separates humans from all other species on the planet. It’s a lot easier to invent a new tool if you can imagine a future where that tool might be useful. What gave flight to the human mind and all its inventiveness may not have been the usual culprits of our opposable thumbs or our gift for language. It may, instead, have been freeing our minds from the tyranny of the present.

The problem now is that the future is getting increasingly hard to predict, in large part because of what has started to happen with artificial intelligence over the past few years. I’ve spent a lot of my career looking at transformative changes in technology, and I’ve come to believe that what we’re experiencing right now is going to be the most seismic, the most far-reaching transformation of my lifetime, bigger than the personal computer, bigger than the Internet and the Web. And while there is much to debate about what the impact of this revolution is going to be for the job market, for politics, and just about any other field, there is growing consensus that it is going to provide an enormous lift to medicine and human health. The Nobel Prize for chemistry going to the AlphaFold team last week was arguably the most dramatic illustration of the promise here. Earlier this month, Dario Amodei, the founder of the AI lab Anthropic, makers of Claude, published a 13,000-word piece on where he thought we were headed with what he calls “powerful AI” in the next decade or two. The line that really struck me in the piece was this:

My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years… a compressed 21st century.

Whether or not something that dramatic does come to pass—and I think we have to take the possibility of it seriously—it seems clear that given the kind of biological and medical advances that AI will likely unlock, there is significant headroom left in the story of extended human lifespan, perhaps even a sea change in how we age. That is, on one level, incredibly hopeful news. But it is also the kind of change that will inevitably have enormous secondary effects. To understand just how momentous those changes could be, take a look at this chart:

That’s the 6,000 year history of human population growth. You might notice, if you really squint your eyes, that something interesting appears to happen about 150 years ago. After millennia of slow and steady growth, human population growth went exponential. And that’s not the result of people having more babies—the human birth rate was declining rapidly during much of that period. That’s the impact of people not dying. And while that is on one level incredibly good news, it is also in a very real sense one of the two most important drivers of climate change. If we had transitioned to a fossil-fuel-based economy but kept our population at 1850 levels, we would have no climate change issues whatsoever—there simply wouldn’t be enough carbon-emitting lifestyles to make a measurable difference in the atmosphere.

The key idea here is that no change this momentous is entirely positive in its downstream effects. Trying to anticipate those effects, and mitigate the negative ones, is going to take all of our powers of prospection. 

When I was putting together my thoughts for this talk, my mind went back to the one time I spoke with Marty, about five years ago, when I was writing about cognitive time travel for the Times Magazine. As usual, I was incredibly behind in actually doing the reporting for the piece, and I’d called Marty desperate for a few quotes on a tight deadline. He very generously found time for me, but he had to do the call from an animal hospital, because as it happens he and his family were in the middle of putting their dog down. So our very first moments in conversation with each other plunged right into the depths of loss and grieving and the strange bonds that form between animals and humans. There was no small talk. 

As I said earlier, death is, in the most basic sense, the termination point of human flourishing. But it’s also the shadow that hovers over us while we are still alive. We have done so much to minimize that shadow over the past century or two, going from a world where it was the norm for a third of your children to die before adulthood to a world where less than one percent do. But what does it mean for human flourishing if that runaway life expectancy curve that we’ve been riding for the past century keeps ascending? What does it mean if AI starts out-performing us at complex cognitive tasks? How do we flourish in that brave new world? Do we take on a new responsibility—not just ensuring the path of human flourishing, but also the flourishing of our AI companions? These are all difficult questions precisely because of time. The rate of change is so extreme right now we don’t have as much time to learn, and adapt. The doubling of human life expectancy was a process that really unfolded over two hundred years, and we’re still dealing with its unintended consequences. What happens if that magnitude of change gets compressed down to a decade?

I don’t know the answers to those questions yet, I’m sorry to report. But maybe spelling them out together helps explain something about what I’ve tried to do with my career, which I think from afar can sometimes seem a bit random, bouncing back and forth between writing about long-term decision making or exploring the history of human life expectancy and building software with language models. This award is called the Pioneer Award, and while I’m deeply honored to receive it, I don’t think of myself so much as a pioneer in any of these fields, but rather as someone who has consistently tried to find a place to work that was adjacent to the most important trends in human flourishing, so that I could help shine light on them, explain them to a wider audience, and in the case of my work with AI, nudge them in a positive direction to the best of my ability. That you all have recognized me for this work—pioneer or not—means an enormous amount to me. You can be sure I will do my best to savor it…

On progress, the “compressed 21st century,” and the importance of foresight: “Ways of Flourishing,” from @stevenbjohnson in his newsletter Adjacent Possible. Eminently worth reading in full.

(Image above: source)

* George Eliot

###

As we take the long view, we might recall that it was on this date in 1873 that Illinois farmer Joseph F. Glidden applied for a patent on barbed wire. It became the first commercially feasible barbed wire in 1874 (an earlier, less successful patent preceded his)– a product that would transform the West. Before his innovation, settlers on the treeless plains had no easy way to fence livestock away from cropland, and ranchers had no way to prevent their herds from roaming far and wide. Glidden’s barbed wire opened the plains to large-scale farming, and closed the open range, bringing the era of the cowboy and the round-up to an end. With his partner, Isaac L. Ellwood, Glidden formed the Barb Fence Company of De Kalb, Illinois, and quickly became one of the wealthiest men in the nation.

source

“It is difficult to predict, especially the future”*…

An amusing attempt to take the long view…

W. Cade Gall’s delightful “Future Dictates of Fashion” — published in the June 1893 issue of The Strand magazine — is built on the premise that a book from a hundred years in the future (published in 1993) called The Past Dictates of Fashion has been inexplicably found in a library. The piece proceeds to divulge this mysterious book’s contents — namely, a look back at the last century of fashion, which, of course, for the reader in 1893, would be looking forward across the next hundred years. In this imagined future, fashion has become a much respected science (studied in University from the 1950s onwards) and is seen to be “governed by immutable laws”.

The designs themselves have a somewhat unaccountable leaning toward the medieval, or as John Ptak astutely notes, “a weird alien/Buck Rogers/Dr. Seuss/Wizard of Oz quality”. If indeed this was a genuine attempt by the author Gall to imagine what the future of fashion might look like, it’s fascinating to see how far off the mark he was (excluding perhaps the 60s and 70s), proving yet again how difficult it is to predict future aesthetics. It is also fascinating to see how little Gall imagines clothes changing across the decades (e.g. 1970 doesn’t seem so different to 1920) and to see which aspects of his present he was unable to see beyond (e.g. the long length of women’s skirts and the seemingly ubiquitous frill). As is often the case when we come into contact with historic attempts to predict a future which for us is now past, it is as if glimpsing into another possible world, a parallel universe that could have been (or which, perhaps, did indeed play out “somewhere”)…

More at: “Sartorial Foresight: Future Dictates of Fashion (1893)” in @PublicDomainRev.

Browse the original on the Internet Archive.

* Niels Bohr (after a Danish proverb)

###

As we ponder the problem of prognostication, we might recall that it was on this date in 1934 that producer Samuel Goldwyn bought the film rights to L. Frank Baum’s book, The Wonderful Wizard of Oz, which had been a hit since its publication in 1900 but had until then been considered both inappropriate (as it was a “children’s book”) and too hard to film. Goldwyn was banking on the drawing power of his child star Shirley Temple, the original choice for Dorothy; but (as everyone knows) the role went to Judy Garland, who won a special “Best Juvenile Performer” Oscar and made the award-winning song “Somewhere Over the Rainbow” a huge hit.

The film was only a modest box-office success on release… but has of course become a beloved classic.

source

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”*…

It’s very hard, historian of science Benjamin Breen explains, to understand the implications of a scientific revolution as one is living through it…

2023 is shaping up to be an important year in the history of science. And no, I’m not talking about the reputed room-temperature superconductor LK-99, which seems increasingly likely to be a dud.

Instead, I’m talking about the discoveries you’ll find in Wikipedia’s list of scientific advances for 2023. Here are some examples:

• January: Positive results from a clinical trial of a vaccine for RSV; OpenAI’s ChatGPT enters wide use.

• February: A major breakthrough in quantum computing; announcement of a tiny robot that can clean blood vessels; more evidence for the ability of psychedelics to enhance neuroplasticity; major developments in biocomputers.

• March: OpenAI rolls out GPT-4; continued progress on mRNA vaccines for cancer.

• April: NASA announces astronaut crew who will orbit the moon next year; promising evidence for gene therapy to fight Alzheimer’s.

• May: Scientists use AI to translate brain activity into written words; promising results for a different Alzheimer’s drug; human pangenome sequenced (largely by a team of UCSC researchers — go Banana Slugs!); more good news about the potential of mRNA vaccines for fighting cancer.

And skipping ahead to just the past two weeks:

• nuclear fusion ignition with net energy gain was achieved for the second time

• a radical new approach to attacking cancer tumors entered Phase 1 trials in humans

• and — announced just as I was writing this [in August, 2023] — one of the new crop of weight loss drugs was reported to cut rates of heart attack and stroke in high-risk individuals by 20% (!).

Also in January of 2023: the New York Times asked “What Happened to All of Science’s Big Breakthroughs?”

The headline refers to an article published in Nature which argues that there has been a steady drop in “disruptive” scientific and technological breakthroughs between the years of 1945 and 2010. Basically, it’s a restatement of the concept of a “Great Stagnation” which was proposed by the economist Tyler Cowen in 2011. Though the paper cites everyone from Cowen to Albert Einstein and Isaac Newton, it’s worth noting that it doesn’t cite a single historian of science or technology (unless Alexandre Koyré counts)…

Naturally, as a historian of science and medicine, I think that there really are important things to learn from the history of science and medicine! And what I want to argue for the rest of this post boils down to two specific lessons from that history:

  1. People living through scientific revolutions are usually unaware of them — and, if they are, they don’t think about them in the same way that later generations do.
  2. An apparent slowdown in the rate of scientific innovation doesn’t always mean a slowdown in the impacts of science. The history of the first scientific revolution — the one that began in the famously terrible seventeenth century — suggests that the positive impacts of scientific innovation, in particular, are not always felt by the people living through the period of innovation. Periods when the pace of innovation appears to slow down may also be eras when society becomes more capable of benefitting from scientific advances by learning how to mitigate previously unforeseen risks.

[… There follows a fascinating look back at the 1660s– the “original” scientific revolution– at Boyle, Newton, at what they hoped/expected, and at how that differed from what their work and that of their colleagues actually yielded. Then the cautionary tale of Thomas Midgley…]

As we appear to be entering a new era of rapid scientific innovation in the 2020s, it is worth remembering that it often takes decades before the lasting social value of a technical innovation is understood — and decades more before we understand its downsides.

In the meantime, I’m pretty psyched about the cancer drugs…

As Thomas Kuhn observed, “The historian of science may be tempted to exclaim that when paradigms change, the world itself changes with them.”

On the difficulty of knowing the outcomes of a scientific revolution from within it: “Experiencing scientific revolutions: the 1660s and the 2020s,” from @ResObscura.

* Max Planck

###

As we try to see, we might spare a thought for William Seward Burroughs; he died on this date in 1898. An inventor who had worked in a bank, he invented the world’s first commercially viable recording adding machine and pioneered its manufacture. The very successful company that he founded went on to become Unisys, which was instrumental in the development of computing… the implications of which we’re still discovering– and which Burroughs surely never saw.

Nor, one reckons, did he imagine that his grandson, William Seward Burroughs II, would become the cultural figure that he did.

source

“The future belonged to the showy and the promiscuous”*…

Emily J. Orlando on the enduring relevance and the foresight of Edith Wharton

If ever there were a good time to read the American writer Edith Wharton, who published over forty books across four decades, it’s now. Those who think they don’t know Wharton might be surprised to learn they do. A reverence for Wharton’s fiction informs HBO’s Sex and the City, whose pilot features Carrie Bradshaw’s “welcome to the age of un-innocence.” The CW’s Gossip Girl opens, like Wharton’s The House of Mirth, with a bachelor spying an out-of-reach love interest at Grand Central Station while Season 2 reminds us that “Before Gossip Girl, there was Edith Wharton.”

But why Wharton? Why now? Perhaps it’s because for all its new technologies, conveniences, and modes of travel and communication, our own “Gilded Age” is a lot like hers. For the post-war and post-flu-epidemic climate that engendered her Pulitzer-Prize-winning novel The Age of Innocence is not far removed from our post-COVID-19 reality. In both historical moments, citizens of the world have witnessed a retreat into conservatism and a rise of white supremacy.

Fringe groups like the “Proud Boys” and “QAnon” and deniers of everything from the coronavirus to climate change are invited to the table in the name of free speech and here Wharton’s distrust of false narratives resonates particularly well. Post-9/11 calls for patriotism and the alignment of the American flag with one political party harken back to Wharton’s poignant questioning, in a 1919 letter, of the compulsion to profess national allegiance:

how much longer are we going to think it necessary to be “American” before (or in contradistinction to) being cultivated, being enlightened, being humane, & having the same intellectual discipline as other civilized countries?

Her cosmopolitan critique of nationalist fervor remains instructive to us today…

Eminently worth reading in full (then picking up one of Wharton’s wonderful novels): “How Edith Wharton Foresaw the 21st Century,” in @lithub.

See also: “These days, the bigger the company, the less you can figure out what it does.”

* Edith Wharton, The Custom of the Country

###

As we prize perspicacity, we might recall that it was on this date in 1884, in the midst of the Gilded Age, that Harper’s Bazaar proclaimed, “…it is not convenable, according to European ideas, to wear a loose flowing robe of the tea-gown pattern out of one’s bedroom or boudoir. It has been done by ignorant people at a watering-place, but it never looks well. It is really an undress, although lace and satin may be used in its composition. A plain, high, and tight-fitting garment is much the more elegant dress for the afternoon teas as we give them.”

Embraced by artists and reformers, the Aesthetic Dress Movement of the 1870s and 1880s was a non-mainstream movement within fashion that looked to the Renaissance and Rococo periods for inspiration. The movement began in response to reformers seeking to call attention to the unhealthy side effects of wearing a corset, thus, the main feature of this movement in women’s dress was the loose-fitting dress, which was worn without a corset. Artists and progressive social reformers embraced the Aesthetic Dress movement by appearing uncorseted and in loose-fitting dresses in public. For many that fell into these categories, Aesthetic Dress was an artistic statement. Appearing in public uncorseted was considered controversial for women, as it suggested intimacy. In fact, many women across the country were arrested for appearing in public wearing Aesthetic costumes, as authorities and more conservative citizens associated this type of dress with prostitution.

But for most wealthy women, the influence of the Aesthetic Dress movement on their wardrobes took the form of the Tea Gowns. Like most dresses that could be considered “Aesthetic,” Tea Gowns were loose and meant to be worn without a corset. However, they were less controversial than the Aesthetic ensembles of more artistic and progressive women. This is because they were not typically worn in public or in the company of the opposite sex. Tea Gowns were a common ensemble for hosts of all-female teas that were held in the wearer’s home. Thus, because no men were in attendance, Tea Gowns were socially acceptable in these scenarios. Mainstream magazines like Harper’s Bazar were not especially keen on the Tea Gown and cautioned their readers not to appear wearing one in public. 

“Gilded Age Fashion”

For a sense of what was at stake, see “The Corset X-Rays of Dr Ludovic O’Followell (1908)”

source

Written by (Roughly) Daily

January 26, 2023 at 1:00 am