(Roughly) Daily

Posts Tagged ‘Steven Johnson’

“We are saved by making the future present to ourselves”*…

Recently, Steven Johnson (and here) received the Pioneer Award in Positive Psychology from UPenn’s Positive Psychology Center. Presented by his friend and mentor Marty Seligman, it honored Johnson’s “work over the years advancing the cause of human flourishing.”

From his acceptance speech…

… I’ve always been drawn to… long-term perspectives, where you position yourself… in the larger context of hundreds or thousands of years of human suffering and progress. Some of my California friends even built an entire organization to celebrate that long-term view: the Long Now Foundation, which is dedicated to thinking on the scale of centuries or millennia, encouraging us to get out of the 24-hour news cycle that dominates so much of our lives today. A technologically advanced culture cannot flourish without getting better at anticipating the future. That’s why science fiction matters. That’s why scenario planning matters. That’s why complex software simulations that enable us to forecast things like climate change on the scale of decades matter. 

And here I want to bring us back to another idea that Marty Seligman has been an advocate for. Almost ten years ago, he edited a collection of essays called Homo Prospectus which had a huge influence on my thinking about the world. The core idea behind that book was that a defining superpower of human beings is our ability to mentally time-travel to possible future states, and think about how we might organize our activities to arrive at those imagined future outcomes. 

“What best distinguishes our species,” he wrote in the introduction to that book, “is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain.” 

It is unclear whether nonhuman animals have any real concept of the future at all. Some organisms display behavior that has long-term consequences, like a squirrel’s burying a nut for winter, but those behaviors are all instinctive. The latest studies of animal cognition suggest that some primates and birds may carry out deliberate preparations for events that will occur in the near future. But making decisions based on future prospects on the scale of months or years — even something as simple as planning a gathering of the tribe a week from now — would be unimaginable even to our closest primate relatives. If the Homo prospectus theory is correct, those limited time-traveling skills explain an important piece of the technological gap that separates humans from all other species on the planet. It’s a lot easier to invent a new tool if you can imagine a future where that tool might be useful. What gave flight to the human mind and all its inventiveness may not have been the usual culprits of our opposable thumbs or our gift for language. It may, instead, have been freeing our minds from the tyranny of the present.

The problem now is that the future is getting increasingly hard to predict, in large part because of what has started to happen with artificial intelligence over the past few years. I’ve spent a lot of my career looking at transformative changes in technology, and I’ve come to believe that what we’re experiencing right now is going to be the most seismic, the most far-reaching transformation of my lifetime, bigger than the personal computer, bigger than the Internet and the Web. And while there is much to debate about what the impact of this revolution is going to be for the job market, for politics, and just about any other field, there is growing consensus that it is going to provide an enormous lift to medicine and human health. The Nobel Prize for chemistry going to the AlphaFold team last week was arguably the most dramatic illustration of the promise here. Earlier this month, Dario Amodei—the founder of the AI lab Anthropic, makers of Claude—published a 13,000-word piece on where he thought we were headed with what he calls “powerful AI” in the next decade or two. The line that really struck me in the piece was this:

My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years… a compressed 21st century.

Whether or not something that dramatic does come to pass—and I think we have to take the possibility of it seriously—it seems clear that given the kind of biological and medical advances that AI will likely unlock, there is significant headroom left in the story of extended human lifespan, perhaps even a sea change in how we age. That is, on one level, incredibly hopeful news. But it is also the kind of change that will inevitably have enormous secondary effects. To understand just how momentous those changes could be, take a look at this chart:

That’s the 6,000-year history of human population growth. You might notice, if you really squint your eyes, that something interesting appears to happen about 150 years ago. After millennia of slow and steady growth, human population growth went exponential. And that’s not the result of people having more babies—the human birth rate was declining rapidly during much of that period. That’s the impact of people not dying. And while that is on one level incredibly good news, it is also in a very real sense one of the two most important drivers of climate change. If we had transitioned to a fossil-fuel-based economy but kept our population at 1850 levels, we would have no climate change issues whatsoever—there simply wouldn’t be enough carbon-emitting lifestyles to make a measurable difference in the atmosphere.

The key idea here is that no change this momentous is entirely positive in its downstream effects. Trying to anticipate those effects, and mitigate the negative ones, is going to take all of our powers of prospection. 

When I was putting together my thoughts for this talk, my mind went back to the one time I spoke with Marty, about five years ago, when I was writing about cognitive time travel for the Times Magazine. As usual, I was incredibly behind in actually doing the reporting for the piece, and I’d called Marty desperate for a few quotes on a tight deadline. He very generously found time for me, but he had to do the call from an animal hospital, because as it happens he and his family were in the middle of putting their dog down. So our very first moments in conversation with each other plunged right into the depths of loss and grieving and the strange bonds that form between animals and humans. There was no small talk. 

As I said earlier, death is, in the most basic sense, the termination point of human flourishing. But it’s also the shadow that hovers over us while we are still alive. We have done so much to minimize that shadow over the past century or two, going from a world where it was the norm for a third of your children to die before adulthood to a world where less than one percent do. But what does it mean for human flourishing if that runaway life expectancy curve that we’ve been riding for the past century keeps ascending? What does it mean if AI starts outperforming us at complex cognitive tasks? How do we flourish in that brave new world? Do we take on a new responsibility—not just ensuring the path of human flourishing, but also the flourishing of our AI companions? These are all difficult questions precisely because of time. The rate of change is so extreme right now that we don’t have as much time to learn and adapt. The doubling of human life expectancy was a process that really unfolded over two hundred years, and we’re still dealing with its unintended consequences. What happens if that magnitude of change gets compressed down to a decade?

I don’t know the answers to those questions yet, I’m sorry to report. But maybe spelling them out together helps explain something about what I’ve tried to do with my career, which I think from afar can sometimes seem a bit random, bouncing back and forth between writing about long-term decision making, exploring the history of human life expectancy, and building software with language models. This award is called the Pioneer Award, and while I’m deeply honored to receive it, I don’t think of myself so much as a pioneer in any of these fields, but rather as someone who has consistently tried to find a place to work that was adjacent to the most important trends in human flourishing, so that I could help shine light on them, explain them to a wider audience, and in the case of my work with AI, nudge them in a positive direction to the best of my ability. That you all have recognized me for this work—pioneer or not—means an enormous amount to me. You can be sure I will do my best to savor it…

On progress, the “compressed 21st century,” and the importance of foresight: “Ways of Flourishing,” from @stevenbjohnson in his newsletter Adjacent Possible. Eminently worth reading in full.

(Image above: source)

* George Eliot

###

As we take the long view, we might recall that it was on this date in 1873 that Illinois farmer Joseph F. Glidden applied for a patent on barbed wire. Granted in 1874, it yielded the first commercially feasible barbed wire (an earlier, less successful patent preceded his), a product that would transform the West. Before his innovation, settlers on the treeless plains had no easy way to fence livestock away from cropland, and ranchers had no way to prevent their herds from roaming far and wide. Glidden’s barbed wire opened the plains to large-scale farming, and closed the open range, bringing the era of the cowboy and the round-up to an end. With his partner, Isaac L. Ellwood, Glidden formed the Barb Fence Company of De Kalb, Illinois, and quickly became one of the wealthiest men in the nation.

source

“O brave new world, that has such people in ‘t!”*…

The estimable Steven Johnson suggests that the creation of Disney’s masterpiece, Snow White, gives us a preview of what may be coming with AI algorithms sophisticated enough to pass for sentient beings…

… You can make the argument that the single most dramatic acceleration point in the history of illusion occurred between the years of 1928 and 1937, the years between the release of Steamboat Willie [here], Disney’s breakthrough sound cartoon introducing Mickey Mouse, and the completion of his masterpiece, Snow White, the first long-form animated film in history [here— actually the first full-length animated feature produced in the U.S.; the first produced anywhere in color]. It is hard to think of another stretch where the formal possibilities of an artistic medium expanded in such a dramatic fashion, in such a short amount of time.

[There follows a fascinating history of the Disney Studios’ technical innovations that made Snow White possible, and an account of the film’s remarkable premiere…]

In just nine years, Disney and his team had transformed a quaint illusion—the dancing mouse is whistling!—into an expressive form so vivid and realistic that it could bring people to tears. Disney and his team had created the ultimate illusion: fictional characters created by hand, etched onto celluloid, and projected at twenty-four frames per second, that were somehow so believably human that it was almost impossible not to feel empathy for them.

Those weeping spectators at the Snow White premiere signaled a fundamental change in the relationship between human beings and the illusions concocted to amuse them. Complexity theorists have a term for this kind of change in physical systems: phase transitions. Alter one property of a system—lowering the temperature of a cloud of steam, for instance—and for a while the changes are linear: the steam gets steadily cooler. But then, at a certain threshold point, a fundamental shift happens: below 212 degrees Fahrenheit, the gas becomes liquid water. That moment marks the phase transition: not just cooler steam, but something altogether different.

It is possible—maybe even likely—that a further twist awaits us. When Charles Babbage encountered an automaton of a ballerina as a child in the early 1800s, the “irresistible eyes” of the mechanism convinced him that there was something lifelike in the machine.  Those robotic facial expressions would seem laughable to a modern viewer, but animatronics has made a great deal of progress since then. There may well be a comparable threshold in simulated emotion—via robotics or digital animation, or even the text chat of an AI like LaMDA—that makes it near impossible for humans not to form emotional bonds with a simulated being. We knew the dwarfs in Snow White were not real, but we couldn’t keep ourselves from weeping for their lost princess in sympathy with them. Imagine a world populated by machines or digital simulations that fill our lives with comparable illusion, only this time the virtual beings are not following a storyboard sketched out in Disney’s studios, but instead responding to the twists and turns and unmet emotional needs of our own lives. (The brilliant Spike Jonze film Her imagined this scenario using only a voice.) There is likely to be the equivalent of a Turing Test for artificial emotional intelligence: a machine real enough to elicit an emotional attachment. It may well be that the first simulated intelligence to trigger that connection will be some kind of voice-only assistant, a descendant of software like Alexa or Siri—only these assistants will have such fluid conversational skills and growing knowledge of our own individual needs and habits that we will find ourselves compelled to think of them as more than machines, just as we were compelled to think of those first movie stars as more than just flickering lights on a fabric screen. Once we pass that threshold, a bizarre new world may open up, a world where our lives are accompanied by simulated friends…

Are we in for a phase-shift in our understanding of companionship? “Natural Magic,” from @stevenbjohnson, adapted from his book Wonderland: How Play Made The Modern World.

And for a different but apposite perspective, from the ever-illuminating L. M. Sacasas (@LMSacasas), see “LaMDA, Lemoine, and the Allures of Digital Re-enchantment.”

* Shakespeare, The Tempest

###

As we rethink relationships, we might recall that it was on this date in 2007 that the original iPhone went on sale. Generally downplayed by traditional technology pundits after its announcement six months earlier, the iPhone was greeted by long lines of buyers around the country on that first day. Quickly becoming a phenomenon, it sold one million units in only 74 days. Since those early days, successive iPhone models have continued to set sales records and have radically changed not only the smartphone and technology industries, but the world in which they operate as well.

The original iPhone

source

“When the graphs were finished, the relations were obvious at once”*…

We can only understand what we can “see”…

… this long-forgotten, hand-drawn infographic from the 1840s… known as a “life table,” was created by William Farr, a doctor and statistician who, for most of the Victorian era, oversaw the collection of public health statistics in England and Wales… it’s a triptych documenting the death rates by age in three key population groups: metropolitan London, industrial Liverpool, and rural Surrey.

With these visualizations, Farr was making a definitive contribution to an urgent debate from the period: were these new industrial cities causing people to die at a higher rate? In some ways, with hindsight, you can think of this as one of the most crucial questions for the entire world at that moment. The Victorians didn’t realize it at the time, but the globe was about to go from less than five percent of its population living in cities to more than fifty percent in just about a century and a half. If these new cities were going to be killing machines, we probably needed to figure that out.

It’s hard to imagine just how confusing it was to live through the transition to industrial urbanism as it was happening for the first time. Nobody really had a full handle on the magnitude of the shift and its vast unintended consequences. This was particularly true of public health. There was an intuitive feeling that people were dying at higher rates than they had in the countryside, but it was very hard even for the experts to determine the magnitude of the threat. Everyone was living under the spell of anecdote and availability bias. Seeing the situation from the bird’s-eye view of public health data was almost impossible…

The images Farr created told a terrifying and unequivocal story: density kills. In Surrey, the increase of mortality after birth is a gentle slope upward, a dune rising out of the waterline. The spike in Liverpool, by comparison, looks more like the cliffs of Dover. That steep ascent condensed thousands of individual tragedies into one vivid and scandalous image: in industrial Liverpool, more than half of all children born were dead before their fifteenth birthday.

The mean age of death was just as shocking: the countryfolk were enjoying life expectancies close to fifty, likely making them some of the longest-lived people on the planet in 1840. The national average was forty-one. London was thirty-five. But Liverpool—a city that had undergone staggering explosions in population density, thanks to industrialization—was the true shocker. The average Liverpudlian died at the age of twenty-five, one of the lowest life expectancies ever recorded in that large a human population.

There’s a natural inclination to think about innovation in human health as a procession of material objects: vaccines, antibiotics, pacemakers. But Farr’s life tables are a reminder that new ways of perceiving the problems we face, new ways of seeing the underlying data, are the foundations on which we build those other, more tangible interventions. Today cities reliably see life expectancies higher than rural areas—a development that would have seemed miraculous to William Farr, tabulating the data in the early 1840s. In a real sense, Farr laid the groundwork for that historic reversal: you couldn’t start to tackle the problem of how to make industrial cities safer until you had first determined that the threat was real.
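
To make the mechanics of a life table a bit more concrete, here is a minimal sketch in Python, using entirely hypothetical age bands and death rates (not Farr’s actual figures): given the probability of dying within each age band, it follows a cohort forward and condenses those rates into a single mean age at death, the sort of summary that let Farr compare Surrey, London, and Liverpool at a glance.

# A minimal life-table sketch. The age bands and death rates below are
# hypothetical, chosen only to illustrate the mechanics.

def mean_age_at_death(age_bands, death_rates):
    """age_bands: list of (start_age, width) tuples covering a lifetime;
    death_rates: probability of dying within each band, given survival
    to the start of that band."""
    alive = 1.0          # fraction of the cohort still alive
    person_years = 0.0   # accumulated ages at death
    for (start, width), q in zip(age_bands, death_rates):
        deaths = alive * q
        # assume deaths fall, on average, at the middle of the band
        person_years += deaths * (start + width / 2)
        alive -= deaths
    # crude closure: assign any survivors the end of the last band
    last_start, last_width = age_bands[-1]
    person_years += alive * (last_start + last_width)
    return person_years  # cohort size is 1.0, so this is the mean age

bands = [(0, 5), (5, 10), (15, 25), (40, 30)]   # ages 0-5, 5-15, 15-40, 40-70
rural_rates = [0.15, 0.05, 0.20, 0.60]          # gentler childhood mortality
industrial_rates = [0.40, 0.15, 0.25, 0.20]     # steep early spike, "cliffs of Dover"

print(round(mean_age_at_death(bands, rural_rates)))       # noticeably higher
print(round(mean_age_at_death(bands, industrial_rates)))  # noticeably lower

The arithmetic is trivial; the insight, as Johnson argues, came from arranging it so that the comparison could be seen at all.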

Why the most important health innovations sometimes come from new ways of seeing: “The Obscure Hand-Drawn Infographic That Changed The Way We Think About Cities,” from Steven Johnson (@stevenbjohnson). More in his book, Extra Life, and in episode 3 of the PBS series based on it.

* J. C. R. Licklider

###

As we investigate infographics, we might send carefully calculated birthday greetings to Lewis Fry Richardson; he was born on this date in 1881. A mathematician, physicist, and psychologist, he is best remembered for pioneering the modern mathematical techniques of weather forecasting. Richardson’s interest in weather led him to propose a scheme for forecasting using differential equations, the method used today, though when he published Weather Prediction by Numerical Process in 1922, suitably fast computing was unavailable. Indeed, his proof-of-concept, a retrospective “forecast” of the weather on May 20, 1910, took three months to complete by hand. (In fairness, Richardson did the analysis in his free time while serving as an ambulance driver in World War I.) With the advent of modern computing in the 1950s, his ideas took hold. Still, the ENIAC (among the first modern computers) took 24 hours to compute a daily forecast. But as computing got speedier, forecasting became more practical.
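
For a rough sense of what “forecasting by numerical process” means, here is a toy sketch in Python (hypothetical numbers, far simpler than Richardson’s actual equations): discretize a single differential equation describing a quantity carried along by the wind, then march the grid forward in time step by step, the same advance-the-state loop that underlies modern numerical weather prediction.

# A toy "forecast by numerical process": advect an initial anomaly along a
# 1-D grid with an upwind finite-difference scheme. All numbers are
# hypothetical and only illustrate the march-forward-in-time idea.

c = 10.0      # wind speed (grid units per hour, hypothetical)
dx = 50.0     # grid spacing
dt = 1.0      # time step, chosen so that c * dt / dx < 1 (stability)
steps = 24    # "forecast" 24 steps ahead

u = [0.0] * 100          # the initial "weather field"
for i in range(40, 60):  # a bump of, say, temperature anomaly
    u[i] = 1.0

for _ in range(steps):
    new_u = u[:]
    for i in range(1, len(u)):
        # du/dt = -c * du/dx, approximated with an upwind difference
        new_u[i] = u[i] - c * dt / dx * (u[i] - u[i - 1])
    u = new_u

peak = max(range(len(u)), key=lambda i: u[i])
print("peak of the anomaly is now at grid cell", peak)

Richardson did this kind of marching arithmetic by hand, for a far richer set of equations; the main thing that has changed since is the speed at which the arithmetic can be done.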

Richardson also yoked his forecasting techniques to his pacifist principles, developing a method of “predicting” war.  He is considered (with folks like Quincy Wright and Kenneth Boulding) a father of the scientific analysis of conflict.

And Richardson helped lay the foundations for other fields and innovations: his work on coastlines and borders influenced Mandelbrot’s development of fractal geometry, and his method for the detection of icebergs anticipated the development of sonar.

 source