Posts Tagged ‘progress’
“We hear all this talk about integrating the world economically, but there is an argument to be made for not integrating the world economically”*…
… and indeed, those arguments seem to be holding increasing sway. Tyler Cowen ponders the possible economic implications of a future in which global economic interdependence recedes– a future in which the globe’s economies, freer of each other, don’t rise and fall together (as they largely have for decades) to the same extent…
Will we see less co-movement in global economic growth?
That is the question behind my latest Bloomberg column (soft pay wall). China is now, and looking forward, less of a common growth driver around the world. Oil price shocks may not be less important for humanitarian outcomes, but they matter less for many of the largest economies. America is now an oil exporter, and the EU just made some major adjustments in response to the Russia shock. More renewable energy is coming on-line, most of all solar.
The column closes with this:
In this new world, with these major common shocks neutered, a country’s prosperity will be more dependent on national policies than on global trends. Culture and social trust will matter more too, as will openness to innovation — and, as fertility rates remain low or decline, so will a country’s ability to handle immigration. A country that cannot repopulate itself with peaceful and productive immigrants is going to see its economy shrink in relative terms, and probably experience a lot of bumps on the way down.
At the same time, excuses for a lack of prosperity will be harder to come by. The world will not be deglobalized, but it will be somewhat de-risked.
Dare we hope that these new arrangements will produce better results than the old?
Or perhaps a more general rising tide was the only way many countries were going to make progress?
Marginal Revolution
Byrne Hobart reflects further…
When economies were tightly linked, growth in the US led to more demand for manufactured goods from China, which created more demand for raw materials from other parts of the developing world. But if that link is weaker, it’s entirely possible for there to be a boom in some places and a bust elsewhere. That probably increases the personal returns from global macro investing while decreasing its social return: when the world is closely linked, there are massive positive externalities in predicting recessions, because there are so few places to hide. It’s comparatively less essential for the world to know that Germany is slowing down but growth in Indonesia is picking up, but it also means that macro questions are more tractable.
The Diff
* Arundhati Roy
###
As we think tectonically, we might recall that it was on this date in 1865 that the U.S. first issued Gold Certificates.
Americans began to move out west in the first half of the 19th century. Banks started printing their own money to fund land purchases, and that quickly led to two problems: loose money-printing had a volatile effect on prices, and it became increasingly hard to tell what was counterfeit from what wasn’t.
To tackle these problems, the government decreed in the 1830s that it would only accept transactions in gold and silver. But of course, lugging metals around is nobody’s idea of fun. So in 1863, Congress paved the way for the first “gold certificates” to be printed two years later, in November 1865.
A gold certificate was, in effect, a form of paper currency backed by gold – although not entirely. The Treasury was allowed to issue $120 in gold certificates for every $100-worth of gold it held in its vaults…
MoneyWeek
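The $120-per-$100 rule above amounts to fractional backing of roughly 83%. A quick sketch of the arithmetic (function names are ours, for illustration):

```python
# The backing rule described above: the Treasury could issue up to $120
# in gold certificates for every $100 of gold in its vaults.
def max_certificates(gold_reserves: float) -> float:
    """Largest certificate issue allowed against a given gold reserve."""
    return gold_reserves * 120 / 100

def backing_ratio(certificates: float, gold_reserves: float) -> float:
    """Fraction of outstanding certificates actually covered by gold."""
    return gold_reserves / certificates

issue = max_certificates(100)            # $120 of paper on $100 of gold
print(issue, backing_ratio(issue, 100))  # 120.0, about 0.833
```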

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”*…
It’s very hard, historian of science Benjamin Breen explains, to understand the implications of a scientific revolution as one is living through it…
2023 is shaping up to be an important year in the history of science. And no, I’m not talking about the reputed room-temperature superconductor LK-99, which seems increasingly likely to be a dud.
Instead, I’m talking about the discoveries you’ll find in Wikipedia’s list of scientific advances for 2023. Here are some examples:
• January: Positive results from a clinical trial of a vaccine for RSV; OpenAI’s ChatGPT enters wide use.
• February: A major breakthrough in quantum computing; announcement of a tiny robot that can clean blood vessels; more evidence for the ability of psychedelics to enhance neuroplasticity; major developments in biocomputers.
• March: OpenAI rolls out GPT-4; continued progress on mRNA vaccines for cancer.
• April: NASA announces astronaut crew who will orbit the moon next year; promising evidence for gene therapy to fight Alzheimer’s.
• May: Scientists use AI to translate brain activity into written words; promising results for a different Alzheimer’s drug; human pangenome sequenced (largely by a team of UCSC researchers — go Banana Slugs!); more good news about the potential of mRNA vaccines for fighting cancer.
And skipping ahead to just the past two weeks:
• nuclear fusion ignition with net energy gain was achieved for the second time
• a radical new approach to attacking cancer tumors entered Phase 1 trials in humans
• and — announced just as I was writing this [in August, 2023] — one of the new crop of weight loss drugs was reported to cut rates of heart attack and stroke in high-risk individuals by 20% (!).
Also in January of 2023: the New York Times asked “What Happened to All of Science’s Big Breakthroughs?”
The headline refers to an article published in Nature which argues that there has been a steady drop in “disruptive” scientific and technological breakthroughs between the years of 1945 and 2010. Basically, it’s a restatement of the concept of a “Great Stagnation” which was proposed by the economist Tyler Cowen in 2011. Though the paper cites everyone from Cowen to Albert Einstein and Isaac Newton, it’s worth noting that it doesn’t cite a single historian of science or technology (unless Alexandre Koyré counts)…
Naturally, as a historian of science and medicine, I think that there really are important things to learn from the history of science and medicine! And what I want to argue for the rest of this post boils down to two specific lessons from that history:
- People living through scientific revolutions are usually unaware of them — and, if they are, they don’t think about them in the same way that later generations do.
- An apparent slowdown in the rate of scientific innovation doesn’t always mean a slowdown in the impacts of science. The history of the first scientific revolution — the one that began in the famously terrible seventeenth century — suggests that the positive impacts of scientific innovation, in particular, are not always felt by the people living through the period of innovation. Periods when the pace of innovation appears to slow down may also be eras when society becomes more capable of benefiting from scientific advances by learning how to mitigate previously unforeseen risks.
[… There follows a fascinating look back at the 1660s– the “original” scientific revolution– at Boyle and Newton, at what they hoped/expected, and at how that differed from what their work and that of their colleagues actually yielded. Then the cautionary tale of Thomas Midgley…]
As we appear to be entering a new era of rapid scientific innovation in the 2020s, it is worth remembering that it often takes decades before the lasting social value of a technical innovation is understood — and decades more before we understand its downsides.
In the meantime, I’m pretty psyched about the cancer drugs…
As Thomas Kuhn observed, “The historian of science may be tempted to exclaim that when paradigms change, the world itself changes with them.”
On the difficulty of knowing the outcomes of a scientific revolution from within it: “Experiencing scientific revolutions: the 1660s and the 2020s,” from @ResObscura.
* Max Planck
###
As we try to see, we might spare a thought for William Seward Burroughs; he died on this date in 1898. An inventor who had worked in a bank, he created the world’s first commercially viable recording adding machine and pioneered its manufacture. The very successful company that he founded went on to become Unisys, which was instrumental in the development of computing… the implications of which we’re still discovering– and which Burroughs surely never foresaw.
Nor, one reckons, did he imagine that his grandson, William Seward Burroughs II, would become the cultural figure that he did.
“There is often a decades-long time lag between the development of powerful new technologies and their widespread deployment”*…
Jerry Neumann explores the relevance of Carlota Perez‘s thinking (her concept of Techno-Economic Paradigm Shifts and theory of great surges, which built on Schumpeter’s work on Kondratieff waves) to the socio-economic moment in which we find ourselves…
I’ve been in the technology business for more than thirty years and for most of that time it’s felt like constant change. Is this the way innovation progresses, a never-ending stream of new things?
If you look at the history of technological innovation over the course of decades or centuries, not just years, it looks completely different. It looks like innovation comes in waves: great surges of technological development followed by quieter periods of adaptation.
The past 240 years have seen four of these great surges and the first half of a fifth…
Economist Carlota Perez in her 2002 book Technological Revolutions and Financial Capital puts forward a theory that addresses the causes of these successive cycles and tries to explain why each cycle has a similar trajectory of growth and crisis. Her answers lie not just in technological change, but in the social, institutional, and financial aspects of our society itself…
Perez’ theory divides each cycle into two main parts: the installation period and the deployment period. Installation is from irruption to the crisis, and deployment is after the crisis. These are the yin and the yang of the cycle. Some of the differences between the two periods we’ve already mentioned—creative destruction vs. creative construction, financial capital vs. production capital, the battle of the new paradigm with the old vs. acceptance of the new TEP, etc…
We like theory because it tells us why, but more than that, a good theory is predictive. If Perez’ theory is correct, it should allow us to predict what will happen next in the current technological cycle…
A crisp distillation of Perez’s thinking and a provocative consideration of its possible meaning for our times: “The Age of Deployment,” from @ganeumann.
* Carlota Perez (@CarlotaPrzPerez)
###
As we ride the waves, we might recall that it was on this date in 1901– 11 years after the suicide of Vincent Van Gogh (and as his vision and its impact flowered in its “Deployment Age”)– that a large retrospective of his work (71 paintings) was held at the Bernheim-Jeune Gallery. It captured the excitement of André Derain and Maurice de Vlaminck— and thus contributed to the emergence of Fauvism.

“On the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on our way to using it to create a world of suffering and chaos. Paradoxical, no?”*…
Joseph Weizenbaum, a distinguished professor at MIT, was one of the fathers of artificial intelligence and computing as we know it; he was also one of its earliest critics– one whose concerns remain all too current. After a review of his warnings, Librarian Shipwreck shares a still-relevant set of questions Weizenbaum proposed…
At the end of his essay “Once more—A Computer Revolution” which appeared in the Bulletin of the Atomic Scientists in 1978, Weizenbaum concluded with a set of five questions. As he put it, these were the sorts of questions that “are almost never asked” when it comes to this or that new computer related development. These questions did not lend themselves to simple yes or no answers, but instead called for serious debate and introspection. Thus, in the spirit of that article, let us conclude this piece not with definitive answers, but with more questions for all of us to contemplate. Questions that were “almost never asked” in 1978, and which are still “almost never asked” in 2023. They are as follows:
• Who is the beneficiary of our much-advertised technological progress and who are its victims?
• What limits ought we, the people generally and scientists and engineers particularly, to impose on the application of computation to human affairs?
• What is the impact of the computer, not only on the economies of the world or on the war potential of nations, etc…but on the self-image of human beings and on human dignity?
• What irreversible forces is our worship of high technology, symbolized most starkly by the computer, bringing into play?
• Will our children be able to live with the world we are here and now constructing?
As Weizenbaum put it “much depends on answers to these questions.”
Much still depends on answers to these questions.
Eminently worth reading in full: “‘Computers enable fantasies’ – on the continued relevance of Weizenbaum’s warnings,” from @libshipwreck.
See also: “An island of reason in the cyberstream – on the life and thought of Joseph Weizenbaum.”
* Joseph Weizenbaum (1983)
###
As we stay grounded, we might spare a thought for George Stibitz; he died on this date in 1995. A Bell Labs researcher, he was known for his work in the 1930s and 1940s on the realization of Boolean logic digital circuits using electromechanical relays as the switching element– work for which he is internationally recognized as one of the fathers of the modern digital computer.
In 1937, Stibitz, a scientist at Bell Laboratories, built a digital machine based on relays, flashlight bulbs, and metal strips cut from tin cans. He called it the “Model K” because most of it was constructed on his kitchen table. It worked on the principle that if two relays were activated, they caused a third relay to become active, this third relay representing the sum of the operation. Then, in 1940, he gave a demonstration of the first remote operation of a computer.
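In modern terms, the Model K performed one-bit binary addition: the “third relay” that closes when both inputs are active corresponds to the carry of a half adder, while the sum line is on when exactly one input is. A minimal boolean sketch of the logic– not a circuit-accurate model of Stibitz’s machine:

```python
# One-bit binary addition, the operation the Model K performed with
# relays: both inputs active -> the carry relay closes; exactly one
# input active -> the sum line is on.
def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    carry = a and b   # "two relays activated cause a third to become active"
    total = a != b    # XOR: on when exactly one input is on
    return total, carry

for a in (False, True):
    for b in (False, True):
        total, carry = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum={int(total)}, carry={int(carry)}")
```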
“Progress means getting nearer to the place you want to be. And if you have taken a wrong turn, then to go forward does not get you any nearer.”*…
Earlier (Roughly) Daily posts have looked at “Progress Studies” and at its relationship to the Rationalism community. Garrison Lovely takes a deeper look at this growing and influential intellectual movement that aims to understand why human progress happens – and how to speed it up…
For most of history, the world improved at a sluggish pace, if at all. Civilisations rose and fell. Fortunes were amassed and squandered. Almost every person in the world lived in what we would now call extreme poverty. For thousands of years, global wealth – at least our best approximations of it – barely budged.
But beginning around 150-200 years ago, everything changed. The world economy suddenly began to grow exponentially. Global life expectancy climbed from less than 30 years to more than 70 years. Literacy, extreme poverty, infant mortality, and even height improved in a similarly dramatic fashion. The story may not be universally positive, nor have the benefits been equally distributed, but by many measures, economic growth and advances in science and technology have changed the way of life for billions of people.
What explains this sudden explosion in relative wealth and technological power? What happens if it slows down, or stagnates? And if so, can we do something about it? These are key questions of “progress studies”, a nascent self-styled academic field and intellectual movement, which aims to dissect the causes of human progress in order to better advance it.
Founded by an influential economist and a billionaire entrepreneur, this community tends to define progress in terms of scientific or technological advancement, and economic growth – and therefore their ideas and beliefs are not without their critics. So, what does the progress studies movement believe, and what do they want to see happen in the future?
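The scale of that break with the past is easy to check with compound-growth arithmetic– the rates below are illustrative round numbers, not historical estimates:

```python
# Compound growth over two centuries: even modest annual rates produce
# the "sudden explosion" described above. Rates are illustrative.
def growth_factor(rate: float, years: int) -> float:
    """Total multiplication of output after compounding `rate` for `years`."""
    return (1 + rate) ** years

print(growth_factor(0.0005, 200))  # ~0.05%/yr, pre-modern pace: roughly 1.1x
print(growth_factor(0.02, 200))    # 2%/yr, modern era: roughly 52x
```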
Find out at: “Do we need a better understanding of ‘progress’?,” from @GarrisonLovely at @BBC_Future.
Then judge for yourself: was Adorno right? “It would be advisable to think of progress in the crudest, most basic terms: that no one should go hungry anymore, that there should be no more torture, no more Auschwitz. Only then will the idea of progress be free from lies.” Or can– indeed, should– we be more purposively, systemically ambitious?
* C. S. Lewis
###
As we get better at getting better, we might recall that it was on this date in 1922 that the United States paid tribute to a man instrumental in the progress that Progress Studies is anxious to sustain, Alexander Graham Bell…
There were more than 14 million telephones in the United States by the time Alexander Graham Bell died. For one minute on August 4, 1922, they were all silent.
The reason: Bell’s funeral. The American inventor was the first to patent telephone technology in the United States and founded the Bell Telephone System in 1877. Though Bell wasn’t the only person to invent “the transmission of speech by electrical wires,” writes Randy Alfred for Wired, achieving patent primacy in the United States allowed him to spend his life inventing. Even though the telephone changed the world, Bell didn’t stop there.
Bell died on August 2, 1922, just a few days after his 75th birthday. “As a mark of respect every telephone exchange in the United States and Canada closed for a minute when his funeral began around 6:30 p.m. Eastern Standard Time,” Alfred writes.
On the day of the funeral, The New York Times reported that Bell was also honored by advocates for deaf people. “Entirely apart from the monumental achievement of Professor Bell as the inventor of the telephone, his conspicuous work in [sic] behalf of the deaf of this country would alone entitle him to everlasting fame,” said Felix H. Levey, president of the Institution for the Improved Instruction of Deaf Mutes.
In fact, Bell spent much of his income from the telephone on helping deaf people. In 1880, Bell founded the Volta Laboratory. The laboratory, originally called Volta Associates, capitalized on Bell’s work and the work of other sound pioneers. It made money by patenting new innovations for the gramophone and other recorded sound technologies. In 1887, Bell took his share of the money from the sale of gramophone patents and founded the Volta Bureau “as an instrument for the increase and diffusion of knowledge relating to the Deaf,” writes the National Park Service. Bell and Volta continued to work for deaf rights throughout his life.
Volta Laboratory eventually became Bell Laboratories, which was home to many of the twentieth century’s communication innovations.
Smithsonian
