“Technological change is not additive; it is ecological. A new technology does not merely add something; it changes everything”*…
Insofar as (at the risk of sounding tautological) transformative technologies are concerned, Neil Postman is surely right. But then, as Roy Amara pointed out, “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” David Oks uses a common myth of technological replacement to illustrate– and more specifically, to observe that there’s a lot more to replacing labor than just automating tasks.
He begins by recounting an interview of J. D. Vance by Ross Douthat a few months ago in which (in response to a question from Douthat about the potential downsides of AI, in particular the prospect of its “obsoleting” human workers) Vance responded sanguinely, arguing that ATMs didn’t eliminate bank tellers. Indeed, Vance suggested, “we have more bank tellers today than we did when the ATM was created, but they’re doing slightly different work…”
There are two interesting things about what Vance said, both relating to the example that he chose about bank tellers and ATMs.
The first thing is what it tells us about who J. D. Vance is. The bank teller story—how ATMs were predicted to increase bank teller unemployment, but in fact did not—isn’t a story you’ll hear from politicians; in fact, for a long time, Barack Obama would claim, incorrectly, that ATMs had decreased the number of bank tellers, in order to suggest that the elevated unemployment rate during his presidency was due to productivity gains from technology. I’ve never heard a politician cite the bank teller story before: but I have seen the bank teller story cited in a lot of blogs. I’ve seen it cited, for example, by Scott Alexander and Matt Yglesias and Freddie deBoer; and I’ve heard it, upstream of the humble bloggers, from such fine economists as Daron Acemoglu and David Autor. The story of how ATMs didn’t automate bank tellers is, indeed, something of a minor parable of the economics profession…
… But the other thing about the bank teller story that Vance cites is that it’s wrong. We do not, contrary to what Vance claims, have “more bank tellers today than we did when the ATM was created”: we in fact have far fewer. The story he tells Douthat might have been true in 2000 or 2005, but it hasn’t been true for years. Bank teller employment has fallen off a cliff. Here is a graph of bank teller employment since 2000:
So what happened to bank tellers? Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone. The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.
But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?
The answer, I think, is complementarity.
In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.
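The comparative-advantage point can be made concrete with a toy calculation. The numbers below are invented purely for illustration (they are not from Oks’s piece): even when an AI is absolutely better at every task, a human partner can raise total output if finished work requires multiple tasks and the AI’s capacity is scarce.

```python
# Toy illustration of comparative vs. absolute advantage (invented numbers).
# A finished unit of work requires one unit of "drafting" and one of "review".
# The AI is better than the human at BOTH tasks, yet the human still adds value.

ai_rate = {"drafting": 10, "review": 4}   # AI output per unit of effort
human_rate = {"drafting": 2, "review": 3} # human output per unit of effort

# AI alone: it must split one unit of capacity across both tasks.
# The optimal split equalizes drafting and review output:
#   d * 10 = (1 - d) * 4  ->  d = 4/14
d = 4 / 14
ai_alone = d * ai_rate["drafting"]        # ~2.86 finished units

# AI + human: the AI specializes in drafting (where its edge is largest),
# the human takes review (where the human's comparative disadvantage is smallest).
ai_drafting = 1.0 * ai_rate["drafting"]   # 10 units drafted
human_review = 1.0 * human_rate["review"] # 3 units reviewed
combined = min(ai_drafting, human_review) # finished units, bottlenecked at 3

print(f"AI alone: {ai_alone:.2f} finished units")
print(f"AI+human: {combined:.2f} finished units")
# The pair out-produces the solo AI even though the AI is better at everything.
```

That is the whole logic of comparative advantage in miniature: the relevant comparison is pair-versus-solo output, not task-by-task skill.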
That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.
But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. This is the famous lesson of electricity and productivity growth, which I’ll return to in a future piece. When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant…
[Oks unpacks the stories of the ATM’s and iPhone’s impact on banking, then looks ahead, by analogy, to what might be in store with AI. He concludes…]
… I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.
The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.
I don’t think we’ve really yet learned what those new structures will look like. But, at the limit, I don’t quite know why humans have to be involved in those: though I suspect that by the time we’re dealing with the fully-automated organizations of the future, our current set of concerns will have been largely outmoded by new and quite foreign ones, as has always been the case with human progress.
But, however optimistic I might be about the human future, I don’t think it’s worth leaning on the history of past technologies for comfort. The ATM parable is a comforting narrative; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story…
Eminently worth reading in full: “Why ATMs didn’t kill bank teller jobs, but the iPhone did.”
As to whether the wisdom of Amara and Oks is widely-shared, consider this from Crunchbase:
Crunchbase data shows global venture investment totaled $189 billion in February — the largest startup funding month on record — although 83% of capital raised went to just three companies. They include OpenAI, which raised $110 billion in what was also the largest round ever raised by a private, venture-backed company.
The record month for venture funding took place against the backdrop of a trillion-dollar stock market drop as AI compute and tooling unsettled leading public software companies. [See also here.]
All told, venture investment was up close to 780% year over year from the $21.5 billion raised by startups in February 2025.
OpenAI was not the only company to raise tens of billions of dollars last month. Its closest rival, Anthropic, raised $30 billion, marking the third-largest venture round on record.
Waymo, Alphabet‘s self-driving division, raised $16 billion. Together, those three rounds totaled $156 billion, representing 83% of the global venture capital raised in February.
A further four companies each raised $1 billion or more last month: Tokyo-based semiconductor manufacturer Rapidus; London-based self-driving platform Wayve; San Francisco-based AI for robotics World Labs; and Sunnyvale, California-based AI semiconductor company Cerebras Systems.
These massive rounds were led by strategic corporate investors, a host of private equity and alternative investors, as well as a few multistage venture investors and a government agency…
– “Massive AI Deals Drive $189B Startup Funding Record In February While Public Software Stocks Reel”
As Carlota Perez explains in Technological Revolutions and Financial Capital, we’re forever blowing bubbles…
* Neil Postman
###
As we contemplate change, we might send sanitary, odor-free birthday greetings to Sir Joseph William Bazalgette; he was born on this date in 1819. A civil engineer, he became chief engineer of London’s Metropolitan Board of Works, in which role his major achievement was his response to the “Great Stink” of July and August 1858, during which very hot weather exacerbated the smell of untreated human waste and industrial effluent. Bazalgette oversaw the creation of a sewer network for central London which addressed the problem– and was instrumental in relieving the city from cholera epidemics, in beginning the cleansing of the River Thames, and in creating (a crucial part of) the infrastructure that underlay London’s extraordinary growth over the next century.
“You live and learn. At any rate, you live.”*…
… and to the extent that we care about our democracy, that’s an issue.
In an article based on his recent Sakurada-Kai Foundation Oxbridge Lecture at Keio University, Tokyo, John Dunn argues that our democracies depend on our picking up the pace of learning. The abstract:
There cannot be a coherent democratic theory because democracy is not a determinate topic. Representative democracy is a relatively modern regime form. It now needs rehabilitation because so many instances have performed poorly for so long. Representative democracy is now also an aging regime. As a type of state, it is subject to the territorial contentiousness and contested legitimacy of any state. It claims its legitimacy from iterative popular choice, but the plausibility of that claim is increasingly strained by the drastic disparities in life chances reproduced through the property systems it protects. The inherent difficulty for citizens to judge how to advance their collective interests is aggravated by the recent transformation of the information economy. In the cumulative damage inflicted by climate change it faces a deadlier peril than any previous regime and one which only a citizenry that can enlighten itself in time can reasonably hope to nerve itself to meet…
There follows a fascinating– and provocative– elaboration of this thesis in which Dunn considers the history of democracy and the alternatives with which it has, since its inception, vied. He concludes in a bracing fashion…
… The varieties of autocracy which will be on offer wherever the rest of the world has the opportunity to take them up will be without exception the reverse of enlightened – instrumentally and compulsively bound to the extremes of obscurantism, Darkness as a full-on fideist commitment, deliberate self-blinding as a navigational strategy. Move fast, break lots, and never pause to inspect the wreckage.
Representative democracy has recently proved itself a poor structure for collective enlightenment, but the case for it depends on its at least not precluding that, its being still open to making the attempt, and responding to what it can contrive to learn. The most optimistic vision of democracy in action has always seen it as an opportunity for collective self-education on the content of shared goods and the means to achieve them. If that is scarcely a realist picture of what it has ever been, at least it is an image of the right shape. It is too late to ask who will educate the educators. At this point we must educate ourselves together and heed the lessons of that education or we must and will die – not just each of us one by one, as we were always fated to do, but soon enough all of us and for ever…
Eminently worth reading in full: “Can Democracy be Rehabilitated?”
Apposite: “How American Democracy Fell So Far Behind,” from Steven Levitsky and Daniel Ziblatt (gift article– and source of the image above)
* Douglas Adams, Mostly Harmless
###
As we devote ourselves to democracy, we might spare a thought for Ludwig van Beethoven; he died on this date in 1827. A crucial figure in the transition between the Classical and Romantic eras in Western music, he remains one of the most famous and influential of all composers. His best-known compositions include 9 symphonies, 5 concertos for piano, 32 piano sonatas, and 16 string quartets. He also composed other chamber music, choral works (including the celebrated Missa Solemnis), a single opera (Fidelio), and numerous songs.
Relevantly to the piece above…
Beethoven admired the ideals of the French Revolution, so he dedicated his third symphony to Napoleon Bonaparte… until Napoleon declared himself emperor. Beethoven then flew into a rage, ripped the front page from his manuscript and scrubbed out Napoleon’s name…
Beethoven’s temper and Symphony No. 3 ‘Eroica’

“It’s the bell curve again”*…
Joseph Howlett on how the central limit theorem, which started as a bar trick for 18th-century gamblers, became something on which scientists rely every day…
No matter where you look, a bell curve is close by.
Place a measuring cup in your backyard every time it rains and note the height of the water when it stops: Your data will conform to a bell curve. Record 100 people’s guesses at the number of jelly beans in a jar, and they’ll follow a bell curve. Measure enough women’s heights, men’s weights, SAT scores, marathon times — you’ll always get the same smooth, rounded hump that tapers at the edges.
Why does the bell curve pop up in so many datasets?
The answer boils down to the central limit theorem, a mathematical truth so powerful that it often strikes newcomers as impossible, like a magic trick of nature. “The central limit theorem is pretty amazing because it is so unintuitive and surprising,” said Daniela Witten, a biostatistician at the University of Washington. Through it, the most random, unimaginable chaos can lead to striking predictability.
It’s now a pillar on which much of modern empirical science rests. Almost every time a scientist uses measurements to infer something about the world, the central limit theorem is buried somewhere in the methods. Without it, it would be hard for science to say anything, with any confidence, about anything.
“I don’t think the field of statistics would exist without the central limit theorem,” said Larry Wasserman, a statistician at Carnegie Mellon University. “It’s everything.”
Perhaps it shouldn’t come as a surprise that the push to find regularity in randomness came from the study of gambling…
Read on for the fascinating story of: “The Math That Explains Why Bell Curves Are Everywhere,” from @quantamagazine.bsky.social.
Howlett concludes by observing that “The central limit theorem is a pillar of modern science, ultimately, because it’s a pillar of the world around us. When we combine lots of independent measurements, we get clusters. And if we’re clever enough, we can use those clusters to find out something interesting about the processes that made them”– which follows from the story he shares.
Still, we’d do well to remember that there are limits to its applicability, both descriptively (as Nassim Nicholas Taleb points out, “because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty”) and prescriptively (as Benjamin Bloom argues, “The bell-shaped curve is not sacred. It describes the outcome of a random process. Since education is a purposeful activity… the achievement distribution should be very different from the normal curve if our instruction is effective.”).
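Taleb’s descriptive caveat can also be seen numerically. The classic counterexample (my illustration, not one named in the article) is the Cauchy distribution, which has such fat tails that it has no finite mean or variance, so the central limit theorem simply does not apply: averaging more draws does not shrink the spread at all.

```python
# Where the central limit theorem fails: the Cauchy distribution has no
# finite mean or variance, so averages of Cauchy draws are as wild as a
# single draw -- averaging buys you nothing.
import math
import random

random.seed(1)

def cauchy():
    # Standard Cauchy draw via the inverse-CDF (quantile) method.
    return math.tan(math.pi * (random.random() - 0.5))

def spread_of_means(n, trials=2000):
    # Spread (10th to 90th percentile) of sample means of size n.
    means = sorted(sum(cauchy() for _ in range(n)) / n for _ in range(trials))
    return means[int(0.9 * trials)] - means[int(0.1 * trials)]

# For a finite-variance distribution this spread shrinks like 1/sqrt(n);
# for the Cauchy it stays roughly constant no matter how much we average.
for n in (1, 10, 100):
    print(f"n={n:>3}: 10th-90th percentile spread ~= {spread_of_means(n):.2f}")
```

Run it and the spread hovers around the same value for n = 1, 10, and 100 alike, which is precisely the kind of “large deviation” regime in which bell-curve confidence misleads.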
For (much) more, see Peter Bernstein‘s wonderful Against the Gods: The Remarkable Story of Risk
* Robert A. Heinlein, Time Enough for Love
###
As we noodle on the normal distribution, we might send curve-shattering birthday greetings to Norman Borlaug; he was born on this date in 1914. An agronomist, he developed and led initiatives worldwide that contributed to the voluminous increases in agricultural production we call “the Green Revolution.” Borlaug was awarded multiple honors for his work, including the Nobel Peace Prize, the Presidential Medal of Freedom, and the Congressional Gold Medal; he’s one of only seven people to have received all three of those awards.
“A billion here, a billion there, and pretty soon you’re talking real money”
See how the amount donated by Americans to charity per year compares to the size of outstanding student debt. Or how Walmart’s revenue measures up against Elon Musk’s wealth. Or how the U.S. military budget stacks up against China’s… and so much more.
From the estimable David McCandless and his wonderful site Information is Beautiful, an illustration of how expenses and wealth that run to over a billion dollars compare.
Then peruse “$Trillions.”
###
As we ponder the pecuniary, we might recall that on this date in 1989, Exxon Valdez, an oil supertanker owned by Exxon Shipping Company, bound for Long Beach, California, struck Prince William Sound‘s Bligh Reef, 6 mi west of Tatitlek, Alaska. The tanker spilled more than 10 million US gallons of crude oil over the next few days.
The Exxon Valdez spill is the second-largest in U.S. waters, after the 2010 Deepwater Horizon oil spill, in terms of volume of oil released, and the costliest disaster ever to involve no direct human fatalities. The oil, extracted from the Prudhoe Bay Oil Field, eventually affected 1,300 miles of coastline, of which 200 miles were heavily or moderately oiled; and it wreaked havoc with the habitats of the salmon, sea otters, seals, and seabirds in its path.
Exxon spent an estimated $2 billion cleaning up the spill and a further $1 billion to settle related civil and criminal charges. It was also assessed $2.5 billion in punitive damages in a suit (Exxon v. Baker)… though the Supreme Court later reduced that award to roughly $500 million. Exxon remained hugely profitable throughout– the payments were drawn out over decades, and the long-term damage, which continues, is not funded by Exxon. Hence the spill is often cited in conversations about corporate responsibility as shorthand for accountability for societal damage inadequately enforced.