Posts Tagged ‘progress’
“Don’t eat your seed corn”*…
AI doesn’t really “think.” Rather, it remembers how we thought together. Are we about to stop giving it anything worth remembering? Bright Simons with a provocative analysis…
We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.
The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.
But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.
So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.
Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course…
[Simons sketches the history of humans’ intertwined development of both social/organizational and utile technologies, concluding…]
… What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.
Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.
The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”
That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?…
[Simons explores evidence that this may already be materially underway, then explores what that “atrophy” might mean …]
… If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than its failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling its eventual doom.
This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges…
[Simons unpacks that heritage, and puts it into dialogue with recent thoughts from Dario Amodei, Leopold Aschenbrenner, and Sam Altman, concluding…]
… The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human redundancy fantasy. It challenges the self-fulfilling doom-spirals created by the premature reallocation of material resources to a vision of AI. I speak of the philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can even benefit substantially from AI advances.
By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming majority of tools and services that advanced AI models still need to produce useful outputs for users are not themselves AI-like, and most were built before AI ushered in the high-intensity computing era. In the unsexy but critical field of PDF parsing—one of the ways in which AI consumes large amounts of historical data to get smart—studies show that only a very small proportion of tools were created using techniques, like deep learning, that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade all or most of these tools—from PDF parsers to database schemas—to keep pace with the high-intensity computing driven by the power-thirst of AI. Yet we are not at the point where AI can simply create its own dependencies.
Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging in which cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code, often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and that multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.
The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible…
… The Social Edge prescription is that organizations that hire more people to work in AI-enriched, high-interaction, and transmediary roles—where AI scaffolds learning rather than substituting for it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, the value arc shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in transmediation and high human interactionism.
The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.
The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.
Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.
None of these individual acts is catastrophic. However, their compound effect may be.
The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.
Making the right strategic choices about AI is going to become a defining trait of leadership. Bloom et al.’s cross-country research has long established that management quality explains a substantial share of the variance in productivity across teams, organizations, and even countries.
In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff is one whose upside accelerates faster than the downside decelerates. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI would tend to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.
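[Editor’s aside– a sketch of the borrowed options mathematics, ours rather than Simons’s: a payoff function $f$ is convex when

$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y)$ for all $\lambda \in [0,1]$.

The canonical example is a call option’s payoff, $f(S) = \max(S - K, 0)$: downside capped at zero, upside unbounded. Jensen’s inequality, $\mathbb{E}[f(X)] \ge f(\mathbb{E}[X])$, then says that a convex payoff gains from volatility; on this reading, the turbulence of the AI era is exactly what lets small differences in leadership quality compound into large differences in outcomes.]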
The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from…
Eminently worth reading in full: “The Social Edge of Intelligence.”
Consider also the complementary perspectives in “What will be scarce?,” from Alex Imas (via Tim O’Reilly, @timoreilly.bsky.social)… and in the second piece featured last Monday: “Curiosity Is No Solo Act.”
Apposite: “Some Unintended Consequences Of AI,” from Quentin Hardy.
And finally, from the estimable Nathan Gardels, a suggestion that OpenAI’s recent paper on industrial policy for the Age of AI fills a vacuum left by an unimaginative political class and should be taken seriously, at least as a conversation starter: “OpenAI Proposes A ‘Social Contract’ For The Intelligence Age.”
* Old agricultural proverb
###
As we take the long view, we might recall that today is the anniversary of a technological advance that both fed the social edge and encouraged the build-out of the technostructure from which today’s AI hatched: on this date in 1993, Version 1.0 of the web browser Mosaic was released by the National Center for Supercomputing Applications. It was the first software to provide a graphical user interface for the emerging World Wide Web, including the ability to display inline graphics.
The lead Mosaic developer was Marc Andreessen, one of the future founders of Netscape and now a principal at the venture capital firm Andreessen Horowitz (AKA “a16z”)… where he has become a major investor in, promoter of, and political champion of the current crop of AI firms.
“Where all think alike there is little danger of innovation”*…
Last week, Northwestern Professor Joel Mokyr was awarded a half-share in The Nobel Prize in Economic Sciences (AKA The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel) “for having identified the prerequisites for sustained growth through technological progress.” Anton Howes explains why this is noteworthy…
Among today’s winners of the Nobel prize in Economics is Joel Mokyr, the professor at Northwestern whose name is indelibly associated with the primacy of innovation to modern economic growth – the gradual, sustained, and unprecedented improvement in living standards that first Britain, and then country after country, have enjoyed over the past few hundred years. It was reading Mokyr’s The Enlightened Economy that first opened my eyes to the importance of studying the history of invention to explaining the causes of the Industrial Revolution, which I have since made my career.
What makes this Nobel win so remarkable, and so pleasantly surprising, is that Mokyr’s work is not the kind that is often published by economics journals, or even many economic history journals anymore. Over the past few decades, journal editors and peer-reviewers have increasingly insisted that papers must present large datasets that have been treated using complex statistical methods in order to make even the mildest claims about what caused what. Although Mokyr is a master of such methods – he was one of the early pioneers of economic history’s quantitative turn – the work for which he has won the prize is firmly and necessarily qualitative.
Mokyr’s is the economic history that gets written up in books – his classics are The Lever of Riches, The Gifts of Athena, The Enlightened Economy, and A Culture of Growth – and in readable papers shorn of unnecessary formulae. His is history accessible to the layman, though rigorously applying the insights of economics. The prize is a clear signal from the economics profession that it doesn’t just value the application of fancy statistical methods; its highest prize can go to works of history.
Whereas most of the public, and even many historians, think of the causes of modern economic growth – the beginnings of the Industrial Revolution – as being rooted in material factors, like conquest, colonialism, or coal, Mokyr tirelessly argued that it was rooted in ideas, in the intellectual entrepreneurship of figures like Francis Bacon and Isaac Newton, and in the uniquely precocious accumulation in eighteenth-century Britain of useful, often mechanically actionable knowledge. Britain, he argued, through its scientific and literary societies, and its penchant for publications and sharing ideas, was the site of a world-changing Industrial Enlightenment – the place where progress was thought possible, and then became real.
One of Mokyr’s big early insights, first appearing in Lever of Riches, was that many inventions could not be predicted by economic factors. Society could enjoy remarkable productivity improvements from simply increasing the size of the market, leading to division of labour and specialization – what he labelled ‘micro-inventions’ – in the vein popularised by Adam Smith. But this could not explain an invention that appeared out of the blue, like Montgolfier’s hot air balloon in the 1780s – what he called a ‘macro-invention’, not for the magnitude of its impact, but for its novelty. Macro-inventions often required further development to make them important, but the original breakthrough could not be predicted by looking at changes in prices or the availability of resources. It ultimately came down to advances in our understanding of the world. Mokyr put the Scientific Revolution – and the factors that contributed to it – on the economist’s map.
Mokyr also looked at the relationship between different kinds of knowledge. A scientist might know, through observation, that the air has a weight. A craftsman might know, through long training and experience with glass, how to make a long glass tube. Neither could get far alone. But combining them, by creating means to ensure that scientists and craftsmen talked with one another and collaborated – through connecting their propositional and prescriptive knowledge, their heads and hands – very quickly led to the invention of thermometers, barometers, and much more besides, in an ever expanding field of knowledge. What Mokyr taught economists is that it’s not knowledge per se that makes the difference, but the way it is organized. Much of his later work has shown just how deep a pool of skilled artisans Britain’s scientists could draw on.
In a way, Mokyr himself has practised what he preached. As editor of Princeton University Press’s book series on the Economic History of the Western World, Mokyr has for decades provided an all-important space for economists and historians to write the kinds of research that would never have been publishable in economics journals – including explanations of the Industrial Revolution that are the polar opposite of his own. He helped keep the connection between history and economics alive.
Mokyr’s case for the primacy of knowledge and ideas was not an easy one to make to economists. They are naturally drawn to data that can be counted, and not to narrative, no matter how well evidenced. But it appears that Mokyr’s persistence, elevated by his infectious, irrepressible sprightliness, has paid off. His prize is a long overdue recognition of the history in economic history, and a remarkable testament to the power of ideas to persuade…
A triumph for history and the importance of ideas: “Joel Mokyr’s Nobel,” from @antonhowes.bsky.social.
See also: “Why Joel Mokyr deserves his Nobel prize,” gift article from The Economist.
* Edward Abbey, Desert Solitaire
###
As we ponder the process of progress, we might send creative birthday greetings to one of the subjects of Mokyr’s study, Sir Christopher Wren; he was born on this date in 1632. A mathematician and astronomer (who co-founded and later served as president of the Royal Society), he is better remembered as one of the most highly acclaimed English architects in history; he was given responsibility for rebuilding 52 churches in the City of London after the Great Fire in 1666, including what is regarded as his masterpiece, St. Paul’s Cathedral, on Ludgate Hill.
Wren, whose scientific work ranged broadly– e.g., he invented a “weather clock” similar to a modern barometer, devised new engraving methods, and helped develop a blood transfusion technique– was admired by Isaac Newton, as Newton noted in the Principia.

“Great minds think alike”*…

Brian Potter on the (perhaps surprising) frequency with which “heroic” inventors are in fact better understood as the winners of close races…
When Alexander Graham Bell filed a patent for the telephone on February 14th, 1876, he beat competing telephone developer Elisha Gray to the patent office by just a few hours. The resulting legal dispute between Bell Telephone and Western Union (which owned the rights to Gray’s invention) would consume millions of dollars before being resolved in Bell’s favor in 1879.
Such cases of multiple invention are common, and some of the most famous and important modern inventions were invented in parallel. Both Thomas Edison and Joseph Swan patented incandescent lightbulbs in 1880. Jack Kilby and Robert Noyce patented integrated circuits in 1959. Hans von Ohain and Frank Whittle independently invented the jet engine in the 1930s. In a 1922 paper, William Ogburn and Dorothy Thomas documented 150 cases of multiple discovery in science and technology. Robert Merton found 261 examples in 1961, and observed that the phenomenon of multiple discovery was itself a multiple discovery, having been described over and over again since at least the early 19th century.
But exactly how common is multiple invention? The frequency of examples suggests that it can’t be particularly rare, but that doesn’t tell us the rate at which it occurs. In “How Common is Independent Discovery?,” Matt Clancy catalogues several attempts to estimate the frequency of multiple discovery, and tentatively comes up with a frequency of around 2-3% for simultaneous scientific discoveries, and perhaps an 8% chance that a given invention will be reinvented in the next decade. But the evidence for inventions is somewhat inconsistent and varies greatly between studies: another study Clancy cites, which looked at patent interference lawsuits between 1998 and 2014, suggests an independent invention rate of only around 0.02% per year (roughly 0.2% per decade, some forty times lower than the 8% estimate).
The frequency of multiple invention is a useful thing to know, because it can give us clues about the nature of technological progress. A very low rate of multiple invention suggests that progress might be driven by a small number of “genius” inventors (what we might call the Great Man Theory of technological progress), and that it might be highly historically contingent (if you re-rolled the dice of history, maybe you get a totally new set of inventions and a different technological palette). A high rate of multiple invention suggests that progress is more a function of broad historical forces (that inventions appear when the conditions are right), and that progress is less contingent (if you re-rolled the dice of history, you’d get a similar progression of inventions). And if the rate of multiple invention is changing over time, perhaps the nature of technological progress is changing as well…
[Potter reviews the history and concludes that “multiple invention was extremely common”…]
… My main takeaway is that the ideas behind inventions are often in some sense “obvious,” or at least not so surprising or unexpected that many people won’t think of them. In some cases, this is probably because once some new possibility comes along, lots of people think of similar things that could be done with it. Once the properties of electricity began to be understood, many people came up with the idea of using it to send signals (telephone, telegraph), or to create motion (engines and generators), or to generate light (arc lamps, incandescent lights). Once the steam engine came along, lots of people had the idea to use it to power various types of vehicles.
In other cases, multiple invention probably occurs because important problems will attract many people trying to solve them. Steel corrosion was a large problem inspiring many folks to look for ways to create a steel that didn’t rust, or notice the potential value if they stumbled across such a material. Lamps causing mine fires were a major problem, inspiring many people to come up with ideas for safety lamps. The smoke produced by gunpowder was a major problem, inspiring many efforts to develop smokeless powders. And because would-be inventors will all draw from the same pool of available technologies, materials, and capabilities when coming up with a solution, there will be a large degree of convergence in the solutions they come up with…
Fascinating: “How Common is Multiple Invention?” from @const-physics.blogsky.venki.dev.
* common idiom
###
As we reconsider credit, we might recall that it was on this date in 1661 that Isaac Newton— a key figure in the Scientific Revolution and the Enlightenment that followed– entered Trinity College, Cambridge. Soon after Newton obtained his BA degree at Cambridge in August 1665, the university temporarily closed as a precaution against the Great Plague. Although he had been undistinguished as a Cambridge student, his private studies and the years following his bachelor’s degree have been described as “the richest and most productive ever experienced by a scientist.”
Relevant to the piece above, Newton was party to a dispute with Gottfried Wilhelm Leibniz (who entered the University of Leipzig, at age 14, in the same year that Newton matriculated at Cambridge) over which of them developed calculus– called “the greatest advance in mathematics that had taken place since the time of Archimedes.” The modern consensus is that the two men independently developed their ideas.

“The street finds its own uses for things”*…
Your correspondent is off again, this time across borders and for a little longer than my last few absences; regular service should resume around April 19…
The estimable Matt Webb on an approach to thinking more comprehensively and creatively about the ultimate impacts of any given innovation…
… I recently learnt about twig, which is a biotech startup manufacturing industrial chemicals using custom bacteria.
The two examples they cite: palm oil, which is used in lipstick but displaces rainforests; and isoprene, which is used to make tyres but comes from fossil fuels.
What if instead you could engineer a strain of bacteria to bulk produce these chemicals sustainably?
The capabilities are present in the metabolic pathways. So that’s what twig does. At scale, is the promise.
- I hadn’t realised this kind of biotech had gotten to commercialisation! And in London too. Good stuff.
- What Are The Civilian Applications?
What Are The Civilian Applications? is of course a Culture ship name, a GSV (General Systems Vehicle) from Use of Weapons by Iain M. Banks.
It is also an oblique strategy we deployed regularly in design workshops back in the day at BERG, introduced (I think? Gang please correct me if I’m wrong) by long-time design leader and friend Matt Jones. That’s his project history. Go have a read.
Let me unpack.
Oblique Strategies (a history) by Brian Eno and Peter Schmidt, 1975: a deck of approx 100 cards, each of which is a prompt to bump you out of a creative hole.
For example:
Honor thy error as a hidden intention
Or:
Discard an axiom
And so on.
In product invention, which is kinda what we did at BERG and kinda what I do now, it’s handy to carry your own toolkit of prompts. So I adopted What Are The Civilian Applications? into my personal deck of oblique strategies.
Therefore.
What would you do with engineered bacteria that can make palm oil or whatever, if it were cheap enough to play with, if the future were sufficiently distributed, if we all had it at home?
Like, it’s a good question to ask. What would civilians do with engineered bacteria?
Tomato soup.
Instead of buying tomato soup at the store, I’d have a little starter living in a jar. A bioreactor all of my own, and I’d fill it with intelligently designed bacteria that eat slop and excrete ersatz Heinz tomato soup.
I’m not 100% sure what “slop” is in this context. The food I mean. Maybe the bacteria just get energy from sunlight, fix carbon from the air, and I drop in a handful of vitamin gummies or fish flakes every Monday?
A second oblique strategy adopted into my personal deck over the years:
“A good science fiction story should be able to predict not the automobile but the traffic jam,” by Frederik Pohl. As previously discussed re a national drone network.

Let’s say I can go to the store and buy a can of Perpetual Heinz, or however they brand it. A can with a sunroof on the top and a tap on the side that I keep in the garden and I can juice it for soup once a week for a year, or until the bacterial population diverges enough that I’m at risk of brewing neurotoxins or psychedelics or strange and wonderful new flavours or something.
Heinz is not going to like that, economically. They’ll require me to enrol in some kind of printer and printer ink business model where I have to subscribe to the special vitamin pills to keep (a) the soup colony alive and (b) their shareholders happy.
Which will end up being pricey, like the monthly cash we all pay out to mutually incompatible streaming services. Demand will arise for black market FMCGs on the dark web. Jars of illegal Infinite Coca Cola that only requires the cheap generic slop and it tastes just the same.
So I love to play with these strategies and imagine what the world might be like. Each step makes a sort of sense yet you end up somewhere fantastical – that’s the journey I want to take you on in text, too. Then the game, in product invention, is to take those second order possibilities and bring them back to today. (I’m giving away all my secrets now.)
But I prefer cosier, more everyday futures:
Grandma’s secret cake recipe, passed down generation to generation, could be literally passed down: a flat slab of beige ooze kept in a battered pan, DNA-spliced and perfected by guided evolution by her own deft and ancient hands, a roiling wet mass of engineered microbes that slowly scabs over with delicious sponge cake, a delectable crust to be sliced once a week and enjoyed still warm with cream and spoons of pirated jam.
A small jar of precious, proprietary cake ooze handed down parent to child, parent to child, together with a rack filled with the other family starter recipes, a special coming of age moment, a ceremony…
Thinking broadly and deeply about the implications of innovations: “What Are The Civilian Applications?” from @genmon.fyi.
* William Gibson
###
As we ponder the particulars of progress, we might spare a thought for Francis Bacon– the English Renaissance philosopher, lawyer, linguist, composer, mathematician, geometer, musician, poet, painter, astronomer, classicist, historian, theologian, architect, father of modern empirical science (The Baconian– aka The Scientific– Method), and patron of modern democracy, whom some allege to have been the illegitimate son of Queen Elizabeth I of England (and others, the actual author of Shakespeare’s plays). Born in 1561, he died on this date in 1626… after (about a month earlier) he had stuffed a dressed chicken with snow to see how long the flesh could be preserved by the extreme cold. He caught a chill and perished from its complications.