“Technological change is not additive; it is ecological. A new technology does not merely add something; it changes everything”*…
Insofar as (at the risk of sounding tautological) transformative technologies are concerned, Neil Postman is surely right. But then, as Roy Amara pointed out, “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” David Oks uses a common myth of technological replacement to illustrate– and more specifically, to observe that there’s a lot more to replacing labor than just automating tasks.
He begins by recounting an interview of J. D. Vance by Ross Douthat a few months ago in which (in response to a question from Douthat about the potential downsides of AI, in particular the prospect of its “obsoleting” human workers) Vance responded sanguinely, arguing that ATMs didn’t eliminate bank tellers. Indeed, Vance suggested, “we have more bank tellers today than we did when the ATM was created, but they’re doing slightly different work…”
There are two interesting things about what Vance said, both relating to the example that he chose about bank tellers and ATMs.
The first thing is what it tells us about who J. D. Vance is. The bank teller story—how ATMs were predicted to increase bank teller unemployment, but in fact did not—isn’t a story you’ll hear from politicians; in fact, for a long time, Barack Obama would claim, incorrectly, that ATMs had decreased the number of bank tellers, in order to suggest that the elevated unemployment rate during his presidency was due to productivity gains from technology. I’ve never heard a politician cite the bank teller story before: but I have seen the bank teller story cited in a lot of blogs. I’ve seen it cited, for example, by Scott Alexander and Matt Yglesias and Freddie deBoer; and I’ve heard it, upstream of the humble bloggers, from such fine economists as Daron Acemoglu and David Autor. The story of how ATMs didn’t automate bank tellers is, indeed, something of a minor parable of the economics profession…
… But the other thing about the bank teller story that Vance cites is that it’s wrong. We do not, contrary to what Vance claims, have “more bank tellers today than we did when the ATM was created”: we in fact have far fewer. The story he tells Douthat might have been true in 2000 or 2005, but it hasn’t been true for years. Bank teller employment has fallen off a cliff. Here is a graph of bank teller employment since 2000:
So what happened to bank tellers? Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone. The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.
But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?
The answer, I think, is complementarity.
In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.
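Oks’ comparative-advantage point can be made concrete with a toy calculation. Everything below—the two task types, the per-day figures, the pairing rule—is a hypothetical illustration of the logic, not data from his piece:

```python
# Toy illustration of comparative advantage in labor substitution.
# All numbers are invented; they illustrate the logic, not real productivity data.

# Suppose a job consists of two kinds of work:
# "routine" tasks, where AI vastly outperforms a human, and
# "friction" tasks (exceptions, sign-offs, coordination) that
# human-shaped workflows demand and humans still handle better.

ai_alone = {"routine": 100, "friction": 2}      # units/day an AI completes solo
human_alone = {"routine": 10, "friction": 10}   # units/day a human completes solo

# If every routine unit needs a matching friction unit (a workflow
# designed around humans), output is throttled by the slower task type.
def paired_output(routine, friction):
    return min(routine, friction)

solo_ai = paired_output(ai_alone["routine"], ai_alone["friction"])
# A human handling frictions while the AI handles routine work:
team = paired_output(ai_alone["routine"], human_alone["friction"])

print(solo_ai)  # → 2: AI alone is bottlenecked by friction tasks
print(team)     # → 10: human+AI outproduces AI alone, despite AI's absolute advantage
```

The point of the sketch: even though the AI beats the human at every task in absolute terms, the bottleneck structure of a human-shaped workflow makes the human-AI pairing more productive than the AI alone.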
That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.
But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. This is the famous lesson of electricity and productivity growth, which I’ll return to in a future piece. When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant…
[Oks unpacks the stories of the ATM’s and iPhone’s impact on banking, then looks ahead, by analogy, to what might be in store with AI. He concludes…]
… I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.
The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.
I don’t think we’ve really yet learned what those new structures will look like. But, at the limit, I don’t quite know why humans have to be involved in those: though I suspect that by the time we’re dealing with the fully-automated organizations of the future, our current set of concerns will have been largely outmoded by new and quite foreign ones, as has always been the case with human progress.
But, however optimistic I might be about the human future, I don’t think it’s worth leaning on the history of past technologies for comfort. The ATM parable is a comforting narrative; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story…
Eminently worth reading in full: “Why ATMs didn’t kill bank teller jobs, but the iPhone did.”
As to whether the wisdom of Amara and Oks is widely shared, consider this from Crunchbase:
Crunchbase data shows global venture investment totaled $189 billion in February — the largest startup funding month on record — although 83% of capital raised went to just three companies. They include OpenAI, which raised $110 billion in what was also the largest round ever raised by a private, venture-backed company.
The record month for venture funding took place against the backdrop of a trillion-dollar stock market drop as AI compute and tooling unsettled leading public software companies. [See also here.]
All told, venture investment was up close to 780% year over year from the $21.5 billion raised by startups in February 2025.
OpenAI was not the only company to raise tens of billions of dollars last month. Its closest rival, Anthropic, raised $30 billion, marking the third-largest venture round on record.
Waymo, Alphabet‘s self-driving division, raised $16 billion. Together, those three rounds totaled $156 billion, representing 83% of the global venture capital raised in February.
A further four companies each raised $1 billion or more last month: Tokyo-based semiconductor manufacturer Rapidus; London-based self-driving platform Wayve; San Francisco-based AI-for-robotics company World Labs; and Sunnyvale, California-based AI semiconductor company Cerebras Systems.
These massive rounds were led by strategic corporate investors, a host of private equity and alternative investors, as well as a few multistage venture investors and a government agency…
– “Massive AI Deals Drive $189B Startup Funding Record In February While Public Software Stocks Reel”
As Carlota Perez explains in Technological Revolutions and Financial Capital, we’re forever blowing bubbles…
* Neil Postman
###
As we contemplate change, we might send sanitary, odor-free birthday greetings to Sir Joseph William Bazalgette; he was born on this date in 1819. A civil engineer, he became chief engineer of London’s Metropolitan Board of Works, in which role his major achievement was a response to the “Great Stink” of July and August 1858, during which very hot weather exacerbated the smell of untreated human waste and industrial effluent. Bazalgette oversaw the creation of a sewer network for central London which addressed the problem– and was instrumental in relieving the city from cholera epidemics, in beginning the cleansing of the River Thames, and in creating (a crucial part of) the infrastructure that underlay its extraordinary growth over the next century.
“I think it would be a very good idea”*…

As we discuss global culture(s) or geo-politics, we often talk about “The West” (and the rest). In a review of Georgios Varouxakis‘ new book The West: The History of an Idea, Andrew Kaufmann reminds us that it’s important to interrogate that defining concept…
What is the West? Many take the idea for granted, but few can define it. In this meticulously researched, engaging, and sometimes bewildering new book, The West: The History of an Idea, intellectual historian Georgios Varouxakis takes readers on a two-centuries-long tour of the many uses, definitions, and redefinitions of the term. Along the way, readers may find their own long-held assumptions and stereotypes challenged and even undermined.
The book makes a number of arguments, but for the purposes of this review, it’s worth focusing on just a few major ones. The first and most innovative argument of the book is this: The idea of the West as a transnational sociopolitical community distinct from the rest of the world is more recent than we think. This idea received its first sophisticated and coherent articulation in the 1820s from French philosopher Auguste Comte.
While historians and other academics had long looked to past societies like ancient Athens or medieval Europe as representing the “West” against some “other,” Comte was the first to coherently put together a future-oriented political program to be adopted and followed. Most scholars locate the future-focused version of the West’s inauguration in the 1890s, when the idea was used to justify imperial and colonial expansion. By contrast, Varouxakis argues that Comte and his followers wanted to build a West that was anti-imperialist, committed to science and reason, liberated from dogmatic Christianity, and fueled by altruism and sympathy.
As a progressive positivist, Comte saw the “Western Republic” as a via media between a hyper nationalism (of the French variety) and an overly abstract universalism. He imagined a way station that transcended the parochialism of family and nation and would one day be realized and embraced all over the world, even if it would take a full seven centuries from his own writing to come to fruition (that was Comte’s timeline). Neither tied to a particular nation like France (although Paris would be the center of this Republic until Constantinople would replace it), nor embodied by an abstract and universal cosmopolitanism, the Western Republic (or l’Occident) would be set off against its Other—in particular, Russia and the Orient. Still, over time this republic would non-coercively welcome the rest of the world into its fold.
Contrary to a common conception of “the West,” it was not to be a society (or society of societies) committed to democracy, individualism, or liberalism. It was instead a rejection of the hyper-individualism of the modern period, and it was an attempt to recover an older other-centered ethic that had been lost to a prior age.
The second major argument Varouxakis presents is that despite this idea of a transnational West that had its origin in Comte’s work, and despite Comte’s legacy that his disciples clearly carried across continents and centuries, the history of the idea of the West since Comte is complicated and contested. Put another way, while the specter of Comte hovers over the entire narrative, his vision is not always fully realized, nor is the meaning of the term always stable. This complicated history manifests itself in a number of different ways and carries with it some significant implications…
… Many casual users of “Western Civilization” identify it as one and the same as liberal democracy. They assume that somehow, at some point, Britain came to embrace the West as being just that—liberal and democratic. Varouxakis complicates this picture by showing that while a few liberal voices in Britain were certainly also champions of Western Civilization, the more consistent and coherent users of the term were disciples of Comte and therefore much more illiberal in their thinking…
… Or take the more familiar East vs. West framework we associate with the Cold War, where surely the fault lines of Eastern totalitarianism against Western liberal capitalism are clean and clear. But even here the history is complicated, as the period begins with the acknowledgement that it was indeed Soviet Russia that helped to save “western civilization.” Indeed, it took forty years of gradual evolution for the idea of the “West” to finally crystallize around the shared commitment to economic, religious, and political freedom over and against Soviet planned economies, state-sanctioned atheism, and one-party politics with no free and fair elections…
… Given the winding road of the history of the West, it is instructive that there seems to be something of a settlement on its meaning for today, even if there are differences in its application. This can be seen most clearly in Varouxakis’ penultimate chapter on the dispute between Samuel Huntington and Francis Fukuyama after the end of the Cold War. Fukuyama of course is well known for his view that the West—in its embrace of liberal democracy and capitalism—had now emerged triumphant over the defeated ideas of Marxist totalitarianism, which found its fullest expression in Soviet Russia of the East.
Samuel Huntington’s ideas of what the West embodied were not much different, but he diverged from Fukuyama in his vision of what the world’s future likely entailed. For Huntington, the coming years and decades would see a “clash of civilizations,” a conflict of the most basic sort between the West and the other great civilizations of the world. He saw nothing certain about the global triumph of any particular civilizational expression, including the West. Indeed, Huntington contended that it is only the West that even believes in universal ideals, and that all of the non-Western civilizations—whether Chinese, Islamic, or otherwise—are partial in their visions. Therefore, we see here in the latest debate about the West a return of the Comtean question: Will the West become a universal civilization, or will it endure as one of many civilizations forever in conflict with each other? While we may have some agreement on what the West stands for, we may have less confidence in its future in the world.
The history is complex, indeed. But Varouxakis also raises the question of whether Western Civilization—however one defines it—is something to defend in the first place. He considers this question several times in the book, but perhaps none more poignantly than in the Great War itself. For example, there were many who noted the hypocrisy of the “Western powers” that suddenly found common cause with the long-excluded Russia in their fight against Germany and the Central Powers. But perhaps more troubling is what it says about a civilization when it produces not the peace and altruism long promised by its founder, but instead destruction on a scale that had never been seen before in human history. One could likewise ask: What kind of civilization deliberately excludes and exploits the weakest members within its borders, such as in the treatment of African Americans in the United States and of those in the furthest regions of the colonial empires of Europe? This crisis of confidence and feeling of decline continued through the interwar years, as Oswald Spengler expresses in his Decline of the West, a fitting rejoinder to the optimism of Comte’s Western utopia.
And so, perhaps the best way to conclude for readers of all sorts—but especially Christians—is to offer two words of caution. The first is to those who would defend the “West” and “Western Civilization” as something either resonant with or even inspired by a Judeo-Christian worldview. And that word is simple: the origins of the idea of the West in one of its most dominant forms (the Comtean one), and its subsequent historical uses, are non-Christian or even anti-Christian. Indeed, I went into the book expecting a heavy dose of Judeo-Christian connections to the idea of the West, and while the link is not completely absent, I was struck by its muted nature.
Besides the post-Christian progressive vision of Comte himself, consider the voice of Black writer Richard Wright as one representative example of those who followed in the Frenchman’s footsteps. As someone who identified with the West, he considered “the content of [his] Westernness [residing] fundamentally…in [his] secular outlook upon life.” The progress of the West would be realized the more it emancipated itself from the influence of “mystical powers” or the priests who would speak in their name. Armed with the tools of trial-and-error pragmatism, humans can sustain life without recourse to divine help. A West liberated from divine help is a West worth preserving, at least according to Wright.
Overall, the West as an idea has many champions who are quite open in their antipathy toward the Christian religion, and it would be foolish to ignore those influences on the meaning and use of the term for us today. Still, the second and final note I’d like to offer is a bit more optimistic. In the concluding chapter, Varouxakis urges readers to move from the parochialism of “Western” ideas to adopt a language that is universal in its appeal. What, after all, was so attractive about any of the Western projects that Varouxakis so painstakingly chronicles? It was always their global appeal.
Altruism, sympathy, love for others, freedom, individualism, democracy, capitalism. These are not ideals that belong to just a few but rightfully can be embraced by all of God’s creatures in different places, at different times, and in different ways. Certainly for Christians who embrace a global faith, the least we can do is see the inheritance of the “West,” however defined, as a mixed bag of common grace insights and ideas in rebellion against God, combined with the perspective that none of what is worth keeping in the West should ever be kept from those who would embrace its ideals…
Eminently worth reading in full: “The Idea of the West” from @mereorthodoxy.bsky.social.
For a look at the concept in current context/practice: “The Rest take on the West,” from @noemamag.com.
* Gandhi’s response when asked, “what do you think of western civilization?”
###
As we ponder perplexingly plastic paradigms, we might recall that it was on this date in 1957 that “Whole Lotta Shakin’ Going On” by Jerry Lee Lewis peaked at #3 on the US pop singles charts (though it topped the R&B and country charts shortly after). It was a cover of a 1955 release by Big Maybelle of a song written by Dave “Curlee” Williams (and sometimes also credited to James Faye “Roy” Hall). Lewis, with session drummer Jimmy Van Eaton and guitarist Roland Janes, had recorded the song at Sun Records in just one take.
“There is often a decades-long time lag between the development of powerful new technologies and their widespread deployment”*…
Jerry Neumann explores the relevance of Carlota Perez‘s thinking (her concept of Techno-Economic Paradigm Shifts and theory of great surges, which built on Schumpeter’s work on Kondratieff waves) to the socio-economic moment in which we find ourselves…
I’ve been in the technology business for more than thirty years and for most of that time it’s felt like constant change. Is this the way innovation progresses, a never-ending stream of new things?
If you look at the history of technological innovation over the course of decades or centuries, not just years, it looks completely different. It looks like innovation comes in waves: great surges of technological development followed by quieter periods of adaptation.
The past 240 years have seen four of these great surges and the first half of a fifth…
Economist Carlota Perez in her 2002 book Technological Revolutions and Financial Capital puts forward a theory that addresses the causes of these successive cycles and tries to explain why each cycle has a similar trajectory of growth and crisis. Her answers lie not just in technological change, but in the social, institutional, and financial aspects of our society itself…
Perez’s theory divides each cycle into two main parts: the installation period and the deployment period. Installation runs from irruption to the crisis, and deployment follows the crisis. These are the yin and the yang of the cycle. Some of the differences between the two periods we’ve already mentioned—creative destruction vs. creative construction, financial capital vs. production capital, the battle of the new paradigm with the old vs. acceptance of the new TEP, etc…
We like theory because it tells us why, but more than that, a good theory is predictive. If Perez’s theory is correct, it should allow us to predict what will happen next in the current technological cycle…
A crisp distillation of Perez’s thinking and a provocative consideration of its possible meaning for our times: “The Age of Deployment,” from @ganeumann.
* Carlota Perez (@CarlotaPrzPerez)
###
As we ride the waves, we might recall that it was on this date in 1901, 11 years after the suicide of Vincent Van Gogh (and as his vision and its impact flowered in its “Deployment Age”), that a large retrospective of his work (71 paintings) was held at the Bernheim-Jeune Gallery. It captured the excitement of André Derain and Maurice de Vlaminck— and thus contributed to the emergence of Fauvism.

“The idea that there might be limits to growth is for many people impossible to imagine”*…
At some level, we all know that nothing lasts forever…
In 1972, a team of MIT scientists got together to study the risks of civilizational collapse. Their system dynamics model published by the Club of Rome identified impending ‘limits to growth’ (LtG) that meant industrial civilization was on track to collapse sometime within the 21st century, due to overexploitation of planetary resources…
The report, authored by Donella Meadows and colleagues (working for Jay Forrester and the Club of Rome), was controversial from its release, with many pundits (often with the sponsorship of mining, chemical, and petroleum companies) suggesting that the report’s logic was flawed. But as scientists like Graham Turner of CSIRO observed in “A Comparison of the Limits to Growth with Thirty Years of Reality” shortly after the turn of the century (summarized and updated here), the MIT team’s projections were alarmingly on track. A new study suggests that the LtG projections are holding still…
The analysis has now received stunning vindication from a study written by a senior director at professional services giant KPMG, one of the ‘Big Four’ accounting firms as measured by global revenue. The study was published in the Yale Journal of Industrial Ecology in November 2020 and is available on the KPMG website. It concludes that the current business-as-usual trajectory of global civilization is heading toward the terminal decline of economic growth within the coming decade—and at worst, could trigger societal collapse by around 2040.
The study represents the first time a top analyst working within a mainstream global corporate entity has taken the ‘limits to growth’ model seriously. Its author, Gaya Herrington, is Sustainability and Dynamic System Analysis Lead at KPMG in the United States. However, she decided to undertake the research as a personal project to understand how well the MIT model stood the test of time.
The study itself is not affiliated with, or conducted on behalf of, KPMG, and does not necessarily reflect the views of KPMG. Herrington performed the research as an extension of her master’s thesis at Harvard University in her capacity as an advisor to the Club of Rome. However, she is quoted explaining her project on the KPMG website as follows:
“Given the unappealing prospect of collapse, I was curious to see which scenarios were aligning most closely with empirical data today. After all, the book that featured this world model was a bestseller in the 70s, and by now we’d have several decades of empirical data which would make a comparison meaningful. But to my surprise I could not find recent attempts for this. So I decided to do it myself.”
Titled ‘Update to limits to growth: Comparing the World3 model with empirical data’, the study attempts to assess how MIT’s ‘World3’ model stacks up against new empirical data. Previous studies that attempted to do this found that the model’s worst-case scenarios accurately reflected real-world developments. However, the last study of this nature [Graham Turner’s update, as above] was completed in 2014.
Herrington’s new analysis examines data across 10 key variables, namely population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint. She found that the latest data most closely aligns with two particular scenarios, ‘BAU2’ (business-as-usual) and ‘CT’ (comprehensive technology).
“BAU2 and CT scenarios show a halt in growth within a decade or so from now,” the study concludes. “Both scenarios thus indicate that continuing business as usual, that is, pursuing continuous growth, is not possible. Even when paired with unprecedented technological development and adoption, business as usual as modelled by LtG would inevitably lead to declines in industrial capital, agricultural output, and welfare levels within this century.”
Study author Gaya Herrington told Motherboard that in the MIT World3 models, collapse “does not mean that humanity will cease to exist,” but rather that “economic and industrial growth will stop, and then decline, which will hurt food production and standards of living… In terms of timing, the BAU2 scenario shows a steep decline to set in around 2040.”…
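The feedback logic Herrington tested—growth that undermines the resource base it depends on—can be illustrated with a drastically simplified stock-and-flow sketch. To be clear, this is a hypothetical toy model, not World3 (which couples many more stocks: population, pollution, food, and so on) and not Herrington’s analysis; every parameter below is invented for illustration:

```python
# Toy "limits to growth" dynamic: capital grows by reinvesting output,
# but output per unit of capital falls as a nonrenewable resource
# depletes, so growth eventually stalls and reverses.
# All parameters are invented; this is not the World3 model.

def simulate(years=200, capital=1.0, resource=100.0,
             invest_rate=0.05, depreciation=0.03, use_per_output=0.5):
    path = []
    for _ in range(years):
        efficiency = resource / 100.0          # productivity falls with depletion
        output = capital * efficiency
        resource = max(0.0, resource - use_per_output * output)
        capital += invest_rate * output - depreciation * capital
        path.append(output)
    return path

path = simulate()
peak_year = path.index(max(path))
# Output rises for a few decades, peaks, then declines as the
# resource runs down -- overshoot, then contraction.
print(peak_year, round(max(path), 2), round(path[-1], 2))
```

Even this two-stock cartoon reproduces the qualitative shape of the scenarios discussed above: a long rise, a peak, and then a decline driven not by any external shock but by the model’s own internal feedbacks.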
“MIT Predicted in 1972 That Society Will Collapse This Century. New Research Shows We’re on Schedule.” The headline notwithstanding, the MIT team’s study didn’t so much make predictions as it played out a systems dynamics model in order to identify issues that might emerge. And like any model, theirs was rooted in assumptions that could/should have eroded over the last 50 years… which makes the fact that “reality” seems to be tracing the contours that they sketched even more notable. Time to revisit those assumptions… Bracing– but important– reading.
[Image above: source]
* Donella Meadows
###
As we get serious, we might spare a systemic thought for Thomas Samuel Kuhn; he died on this date in 1996. A physicist, historian, and philosopher of science, Kuhn believed that scientific knowledge didn’t advance in a linear, continuous way, but via periodic “paradigm shifts.” Karl Popper had approached the same territory in his development of the principle of “falsification” (to paraphrase: a theory can never be proven true; it can only be accepted provisionally until it is proven false). But while Popper worked as a logician, Kuhn worked as a historian. His 1962 book The Structure of Scientific Revolutions made his case; and while he had– and has– his detractors, Kuhn’s work has been deeply influential in both academic and popular circles (indeed, the phrase “paradigm shift” has become an English-language staple).
“What man sees depends both upon what he looks at and also upon what his previous visual-conception experience has taught him to see.”
Thomas S. Kuhn, The Structure of Scientific Revolutions