(Roughly) Daily

Posts Tagged ‘progress’

“Progress means getting nearer to the place you want to be. And if you have taken a wrong turn, then to go forward does not get you any nearer.”*…

Earlier (Roughly) Daily posts have looked at “Progress Studies” and at its relationship to the Rationalism community. Garrison Lovely takes a deeper look at this growing and influential intellectual movement that aims to understand why human progress happens – and how to speed it up…

For most of history, the world improved at a sluggish pace, if at all. Civilisations rose and fell. Fortunes were amassed and squandered. Almost every person in the world lived in what we would now call extreme poverty. For thousands of years, global wealth – at least our best approximations of it – barely budged.

But beginning around 150-200 years ago, everything changed. The world economy suddenly began to grow exponentially. Global life expectancy climbed from less than 30 years to more than 70 years. Literacy, extreme poverty, infant mortality, and even height improved in a similarly dramatic fashion. The story may not be universally positive, nor have the benefits been equally distributed, but by many measures, economic growth and advances in science and technology have changed the way of life for billions of people.

What explains this sudden explosion in relative wealth and technological power? What happens if it slows down, or stagnates? And if it does, can we do something about it? These are key questions of “progress studies”, a nascent self-styled academic field and intellectual movement that aims to dissect the causes of human progress in order to better advance it.

Founded by an influential economist and a billionaire entrepreneur, this community tends to define progress in terms of scientific or technological advancement and economic growth – and therefore its ideas and beliefs are not without critics. So, what does the progress studies movement believe, and what does it want to see happen in the future?

Find out at: “Do we need a better understanding of ‘progress’?,” from @GarrisonLovely at @BBC_Future.

Then judge for yourself: was Adorno right? “It would be advisable to think of progress in the crudest, most basic terms: that no one should go hungry anymore, that there should be no more torture, no more Auschwitz. Only then will the idea of progress be free from lies.” Or can – should – we be more purposively, systemically ambitious?

* C. S. Lewis

###

As we get better at getting better, we might recall that it was on this date in 1922 that the United States paid tribute to a man instrumental in the progress that Progress Studies is anxious to sustain, Alexander Graham Bell…

There were more than 14 million telephones in the United States by the time Alexander Graham Bell died. For one minute on August 4, 1922, they were all silent.

The reason: Bell’s funeral. The American inventor was the first to patent telephone technology in the United States and founded the Bell Telephone System in 1877. Though Bell wasn’t the only person to invent “the transmission of speech by electrical wires,” writes Randy Alfred for Wired, achieving patent primacy in the United States allowed him to spend his life inventing. Even though the telephone changed the world, Bell didn’t stop there.

Bell died on August 2, 1922, a few months after his 75th birthday. “As a mark of respect every telephone exchange in the United States and Canada closed for a minute when his funeral began around 6:30 p.m. Eastern Standard Time,” Alfred writes.

On the day of the funeral, The New York Times reported that Bell was also honored by advocates for deaf people. “Entirely apart from the monumental achievement of Professor Bell as the inventor of the telephone, his conspicuous work in [sic] behalf of the deaf of this country would alone entitle him to everlasting fame,” said Felix H. Levey, president of the Institution for the Improved Instruction of Deaf Mutes.

In fact, Bell spent much of his income from the telephone on helping deaf people. In 1880 – three years after founding the Bell Telephone System – Bell founded the Volta Laboratory. The laboratory, originally called Volta Associates, capitalized on Bell’s work and the work of other sound pioneers. It made money by patenting new innovations for the gramophone and other recorded-sound technologies. In 1887, Bell took his share of the money from the sale of gramophone patents and founded the Volta Bureau “as an instrument for the increase and diffusion of knowledge relating to the Deaf,” writes the National Park Service. Bell and the Volta Bureau continued to work for deaf rights throughout his life.

Volta Laboratory eventually became Bell Laboratories, which was home to many of the twentieth century’s communication innovations.

Smithsonian


“Monetary policy is one of the most difficult topics in economics. But also, I believe, a topic of absolutely crucial importance for our prosperity.”*…

Basement of a bank full of banknotes at the time of the Mark devaluation during the economic crisis in the Weimar Republic

What can we learn from a twentieth-century economist who was a critic of Keynes and a staunch advocate of the Gold Standard? Samuel Gregg considers the career of Jacques Rueff…

Money, it is often said, makes the world go round. The inverse of that axiom is that monetary disorder brings chaos in its wake. As we learned from the hyperinflation that wreaked havoc in 1920s Germany and the stagflation which hobbled Western economies throughout the 1970s, the effects of such disorder go far beyond the economy. Further complicating the problem is that restoring monetary stability is invariably a painful exercise, often bringing unemployment, recession and lasting social damage in its wake.

As a rule, monetary theory and monetary policy are dry affairs, dominated by highly technical discussions concerning topics such as the nature of capital or the likely impact of interest rates set by central banks. One thinker who did not conform to this mould was the French monetary theorist Jacques Rueff (1896-1978). Arguably France’s most important twentieth-century economist, Rueff played a major role in shaping the Third Republic’s response to the Great Depression in the 1930s, designed the market liberalisation programme that saved France from economic collapse in 1958, and emerged in the 1960s as the leading critic of the US dollar’s role in the global economy and a prominent advocate of a return to the classic gold standard.

Rueff was, however, much more than an economist. A graduate of the École Polytechnique, he was among that small elite of civil servants trained in administration, engineering, mathematics, the natural sciences, foreign languages, and political economy whose role was to inject stability into the perpetual political pandemonium of the Third Republic. But even among that highly educated cohort, Rueff stood out for the breadth and depth of his knowledge and his willingness to integrate it into his economic reflections. For Rueff, the significance of monetary order went beyond issues such as economic growth or employment, as important as they were. Ultimately, it was about whether Western civilisation flourished or embraced self-delusion…

Gregg recounts Rueff’s career, his championing of “real rights” (e.g., property rights) vs. “false rights” (which involve the state declaring something such as unemployment benefits to be a right, then trying to realize it through means that destroy real rights), and his advocacy of a return to the Gold Standard (part of his critique of the U.S. dollar’s role as a reserve currency)… all positions with which reasonable people (including your correspondent) might disagree. But Gregg reminds us that Rueff’s most fundamental goal – a healthy society – surely remains desirable, and that his fear of the chaos that monetary meltdowns can cause is only too justified…

Monetary order wasn’t everything for Rueff. His writings reflect deep awareness of the ways in which culture, religion, philosophy, music and literature influenced civilisational development. Nonetheless Rueff insisted the threats posed by monetary disorder were more than economic. For him, civilisational growth was impossible without monetary order…

Let us not allow means with which we disagree to obscure important ends.

After examining the economic chaos of the early twentieth century, monetary theorist Jacques Rueff argued that without monetary order, civilizational growth is impossible: “Jacques Rueff’s quest for monetary order,” from @DrSamuelGregg in @EngelsbergIdeas.

* Maxime Bernier

###

As we remember that neither should we allow ends with which we disagree to obscure important means, we might spare a thought for Leonid Kantorovich; he died on this date in 1986. An economist and mathematician best known for his theory and development of techniques for the optimal allocation of resources, he is regarded as the founder of linear programming – for which he shared the Nobel Memorial Prize in Economic Sciences in 1975.
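For the curious, here is a minimal sketch of the kind of problem Kantorovich’s linear programming solves – allocating scarce machine hours across products to maximize output. The numbers are invented for illustration, and SciPy’s off-the-shelf `linprog` solver stands in for the methods he developed by hand:

```python
# A toy resource-allocation problem in the spirit of Kantorovich's
# plywood-trust work: choose production quantities x1, x2 to maximize
# output subject to limited machine hours. Illustrative numbers only.

from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  ->  linprog minimizes, so negate the objective
c = [-3.0, -2.0]
A_ub = [
    [1.0, 1.0],   # machine A: x1 + x2 <= 40 hours
    [2.0, 1.0],   # machine B: 2*x1 + x2 <= 60 hours
]
b_ub = [40.0, 60.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(f"optimal plan: x1 = {res.x[0]:.1f}, x2 = {res.x[1]:.1f}; output = {-res.fun:.1f}")
```

The solver lands on the corner point (20, 20): running both machines to capacity beats specializing in either product alone – exactly the sort of non-obvious allocation Kantorovich’s techniques make systematic.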


“Alchemy. The link between the immemorial magic arts and modern science. Humankind’s first systematic effort to unlock the secrets of matter by reproducible experiment.”*…

Science has entered a new era of alchemy, suggests Robbert Dijkgraaf, director of the Institute for Advanced Study in Princeton – and, he argues, that’s a good thing…

Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives — from internet searches to social media feeds — the modern equivalent of turning lead into gold? Moreover: Would that be such a bad thing?

According to the prominent AI researcher Ali Rahimi and others, today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis. Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.

It’s true that we have little fundamental understanding of the inner workings of self-learning algorithms, or of the limits of their applications. These new forms of AI are very different from traditional computer codes that can be understood line by line. Instead, they operate within a black box, seemingly unknowable to humans and even to the machines themselves.

This discussion within the AI community has consequences for all the sciences. With deep learning impacting so many branches of current research — from drug discovery to the design of smart materials to the analysis of particle collisions — science itself may be at risk of being swallowed by a conceptual black box. It would be hard to have a computer program teach chemistry or physics classes. By deferring so much to machines, are we discarding the scientific method that has proved so successful, and reverting to the dark practices of alchemy?

Not so fast, says Yann LeCun, co-recipient of the 2018 Turing Award for his pioneering work on neural networks. He argues that the current state of AI research is nothing new in the history of science. It is just a necessary adolescent phase that many fields have experienced, characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach. It’s simply that we’re more familiar with its opposite.

After all, it’s easy to imagine knowledge flowing downstream, from the source of an abstract idea, through the twists and turns of experimentation, to a broad delta of practical applications. This is the famous “usefulness of useless knowledge,” advanced by Abraham Flexner in his seminal 1939 essay (itself a play on the very American concept of “useful knowledge” that emerged during the Enlightenment).

A canonical illustration of this flow is Albert Einstein’s general theory of relativity. It all began with the fundamental idea that the laws of physics should hold for all observers, independent of their movements. He then translated this concept into the mathematical language of curved space-time and applied it to the force of gravity and the evolution of the cosmos. Without Einstein’s theory, the GPS in our smartphones would drift off course by about 7 miles a day.
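That often-quoted figure checks out with textbook values (the constants below are standard references, not from the essay): general relativity makes the satellites’ clocks run fast by roughly 45.7 microseconds a day, special relativity makes them run slow by roughly 7.2, and the net timing error times the speed of light is the daily ranging drift:

```python
# Back-of-the-envelope check of the "7 miles a day" GPS claim, using
# standard textbook constants (an illustrative sketch, not the essay's
# own calculation).

G_M   = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
C     = 2.99792458e8       # speed of light, m/s
R_E   = 6.371e6            # mean Earth radius, m
R_SAT = 2.6571e7           # GPS orbital radius, m
DAY   = 86400.0            # seconds per day

v_sat = (G_M / R_SAT) ** 0.5                 # orbital speed, ~3.87 km/s
gr = G_M * (1 / R_E - 1 / R_SAT) / C**2      # clocks run fast in weaker gravity
sr = -v_sat**2 / (2 * C**2)                  # clocks run slow at orbital speed

net_error = (gr + sr) * DAY                  # net clock error per day, seconds
drift_km = net_error * C / 1000.0            # ranging error per day

print(f"net clock error: {net_error * 1e6:.1f} microseconds/day")   # ~38.5
print(f"daily drift: {drift_km:.1f} km (~{drift_km * 0.6214:.1f} miles)")
```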

But maybe this paradigm of the usefulness of useless knowledge is what the Danish physicist Niels Bohr liked to call a “great truth” — a truth whose opposite is also a great truth. Maybe, as AI is demonstrating, knowledge can also flow uphill.

In the broad history of science, as LeCun suggested, we can spot many examples of this effect, which can perhaps be dubbed “the uselessness of useful knowledge.” An overarching and fundamentally important idea can emerge from a long series of step-by-step improvements and playful experimentation — say, from Fröbel to Nobel.

Perhaps the best illustration is the discovery of the laws of thermodynamics, a cornerstone of all branches of science. These elegant equations, describing the conservation of energy and increase of entropy, are laws of nature, obeyed by all physical phenomena. But these universal concepts only became apparent after a long, confusing period of experimentation, starting with the construction of the first steam engines in the 18th century and the gradual improvement of their design. Out of the thick mist of practical considerations, mathematical laws slowly emerged…
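For reference, the end product of all that tinkering is remarkably compact – the two laws in modern textbook notation (a standard statement, not drawn from the essay itself):

```latex
% First law: energy is conserved (heat in minus work out)
dU = \delta Q - \delta W

% Second law: entropy never decreases (equality only for reversible processes)
dS \ge \frac{\delta Q}{T}
```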

One could even argue that science itself has followed this uphill path. Until the birth of the methods and practices of modern research in the 17th century, scientific research consisted mostly of nonsystematic experimentation and theorizing. Long considered academic dead ends, these ancient practices have been reappraised in recent years: Alchemy is now considered to have been a useful and perhaps even necessary precursor to modern chemistry — more proto-science than hocus-pocus.

The appreciation of tinkering as a fruitful path toward grand theories and insights is particularly relevant for current research that combines advanced engineering and basic science in novel ways. Driven by breakthrough technologies, nanophysicists are tinkering away, building the modern equivalents of steam engines on the molecular level, manipulating individual atoms, electrons and photons. Genetic editing tools such as CRISPR allow us to cut and paste the code of life itself. With structures of unimaginable complexity, we are pushing nature into new corners of reality. With so many opportunities to explore new configurations of matter and information, we could enter a golden age of modern-day alchemy, in the best sense of the word.

However, we should never forget the hard-won cautionary lessons of history. Alchemy was not only a proto-science, but also a “hyper-science” that overpromised and underdelivered. Astrological predictions were taken so seriously that life had to adapt to theory, instead of the other way around. Unfortunately, modern society is not free from such magical thinking, putting too much confidence in omnipotent algorithms, without critically questioning their logical or ethical basis.

Science has always followed a natural rhythm of alternating phases of expansion and concentration. Times of unstructured exploration were followed by periods of consolidation, grounding new knowledge in fundamental concepts. We can only hope that the current period of creative tinkering in artificial intelligence, quantum devices and genetic editing, with its cornucopia of useful applications, will eventually lead to a deeper understanding of the world…

Today’s powerful but little-understood artificial intelligence breakthroughs echo past examples of unexpected scientific progress: “The Uselessness of Useful Knowledge,” from @RHDijkgraaf at @the_IAS.

Pair with: “Neuroscience’s Existential Crisis – we’re mapping the brain in amazing detail, but our brain can’t understand the picture,” for a less optimistic view.

* John Ciardi

###

As we experiment, we might recall that it was on this date in 1992 that the Roman Catholic Church admitted that it had erred in condemning Galileo.  For 359 years, the Church had excoriated Galileo’s contentions (e.g., that the Earth revolves around the Sun) as anti-scriptural heresy.  In 1633, at age 69, Galileo had been forced by the Roman Inquisition to repent, and he spent the last eight years of his life under house arrest.  After 13 years of inquiry, Pope John Paul II’s commission of historical, scientific, and theological scholars brought the pontiff a “not guilty” finding for Galileo; the Pope himself met with the Pontifical Academy of Sciences to help correct the record.

Galileo (standing; white collar, dark smock) showing the Doge of Venice (seated) how to use the telescope. From a fresco by Giuseppe Bertini


“When the graphs were finished, the relations were obvious at once”*…

We can only understand what we can “see”…

… this long-forgotten, hand-drawn infographic from the 1840s… known as a “life table,” was created by William Farr, a doctor and statistician who, for most of the Victorian era, oversaw the collection of public health statistics in England and Wales… it’s a triptych documenting the death rates by age in three key population groups: metropolitan London, industrial Liverpool, and rural Surrey.

With these visualizations, Farr was making a definitive contribution to an urgent debate from the period: were these new industrial cities causing people to die at a higher rate? In some ways, with hindsight, you can think of this as one of the most crucial questions for the entire world at that moment. The Victorians didn’t realize it at the time, but the globe was about to go from less than five percent of its population living in cities to more than fifty percent in just about a century and a half. If these new cities were going to be killing machines, we probably needed to figure that out.

It’s hard to imagine just how confusing it was to live through the transition to industrial urbanism as it was happening for the first time. Nobody really had a full handle on the magnitude of the shift and its vast unintended consequences. This was particularly true of public health. There was an intuitive feeling that people were dying at higher rates than they had in the countryside, but it was very hard even for the experts to determine the magnitude of the threat. Everyone was living under the spell of anecdote and availability bias. Seeing the situation from the bird’s-eye view of public health data was almost impossible…

The images Farr created told a terrifying and unequivocal story: density kills. In Surrey, the increase of mortality after birth is a gentle slope upward, a dune rising out of the waterline. The spike in Liverpool, by comparison, looks more like the cliffs of Dover. That steep ascent condensed thousands of individual tragedies into one vivid and scandalous image: in industrial Liverpool, more than half of all children born were dead before their fifteenth birthday.

The mean age of death was just as shocking: the countryfolk were enjoying life expectancies close to fifty, likely making them some of the longest-lived people on the planet in 1840. The national average was forty-one. London was thirty-five. But Liverpool—a city that had undergone staggering explosions in population density, thanks to industrialization—was the true shocker. The average Liverpudlian died at the age of twenty-five, one of the lowest life expectancies ever recorded in that large a human population.
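For readers curious about the mechanics behind these numbers, here is a minimal sketch of what a life table computes: how age-specific death probabilities become a life expectancy at birth. The rates below are invented to evoke the Liverpool/Surrey contrast – they are not Farr’s actual figures:

```python
# A toy life-table calculation in the spirit of Farr's: follow a
# synthetic cohort through per-year death probabilities q[x] and add
# up the years lived. Illustrative rates only -- NOT Farr's data.

def life_expectancy(q: list[float]) -> float:
    """Expected age at death for a cohort facing per-year death probabilities q."""
    alive = 1.0
    years_lived = 0.0
    for prob_die in q:
        deaths = alive * prob_die
        years_lived += deaths * 0.5    # assume deaths occur mid-year
        alive -= deaths
        years_lived += alive           # survivors live the full year
    return years_lived

# brutal infant mortality in the "city" schedule, milder in the "rural" one
q_city  = [0.25, 0.10, 0.05] + [0.020] * 97
q_rural = [0.10, 0.04, 0.02] + [0.010] * 97

print(f"city  life expectancy at birth: ~{life_expectancy(q_city):.0f} years")
print(f"rural life expectancy at birth: ~{life_expectancy(q_rural):.0f} years")
```

The point Farr’s triptych made visually falls out of the arithmetic: a steep early spike in mortality drags down the average far more than anything that happens in later life.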

There’s a natural inclination to think about innovation in human health as a procession of material objects: vaccines, antibiotics, pacemakers. But Farr’s life tables are a reminder that new ways of perceiving the problems we face, new ways of seeing the underlying data, are the foundations on which we build those other, more tangible interventions. Today cities reliably see life expectancies higher than rural areas—a development that would have seemed miraculous to William Farr, tabulating the data in the early 1840s. In a real sense, Farr laid the groundwork for that historic reversal: you couldn’t start to tackle the problem of how to make industrial cities safer until you had first determined that the threat was real.

Why the most important health innovations sometimes come from new ways of seeing: “The Obscure Hand-Drawn Infographic That Changed The Way We Think About Cities,” from Steven Johnson (@stevenbjohnson). More in his book, Extra Life, and in episode 3 of the PBS series based on it.

* J. C. R. Licklider

###

As we investigate infographics, we might send carefully calculated birthday greetings to Lewis Fry Richardson; he was born on this date in 1881.  A mathematician, physicist, and psychologist, he is best remembered for pioneering the modern mathematical techniques of weather forecasting.  Richardson’s interest in weather led him to propose a scheme for forecasting using differential equations – the method used today – though when he published Weather Prediction by Numerical Process in 1922, suitably fast computing was unavailable.  Indeed, his proof-of-concept – a retrospective “forecast” of the weather on May 20, 1910 – took three months to complete by hand. (In fairness, Richardson did the analysis in his free time while serving as an ambulance driver in World War I.)  With the advent of modern computing in the 1950s, his ideas took hold.  Still, the ENIAC (the first real modern computer) took 24 hours to compute a daily forecast.  But as computing got speedier, forecasting became more practical.
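To give a flavor of the numerical idea Richardson pioneered – stepping discretized differential equations forward in time – here is a toy sketch. It advects a 1-D disturbance with a simple upwind finite-difference scheme, a stand-in for the far richer primitive equations that real forecasts integrate:

```python
# A toy numerical "forecast": march the advection equation
# df/dt = -u * df/dx forward in time on a discrete grid, using a
# first-order upwind scheme (stable when u*dt/dx <= 1).

N, dx, dt, u = 50, 1.0, 0.5, 1.0   # grid cells, spacing, time step, wind speed
field = [0.0] * N
field[10] = 1.0                    # initial disturbance at cell 10

for _ in range(40):                # integrate 40 time steps forward
    prev = field[:]
    for i in range(N):
        # upwind difference; Python's negative index gives a periodic boundary
        field[i] = prev[i] - u * dt / dx * (prev[i] - prev[i - 1])

peak = max(range(N), key=lambda i: field[i])
print(f"after 40 steps the disturbance peaks near cell {peak} (started at 10)")
```

With u·dt·steps/dx = 20, the disturbance drifts about 20 cells downwind – the same bookkeeping, scaled up enormously, that Richardson spent three months doing by hand.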

Richardson also yoked his forecasting techniques to his pacifist principles, developing a method of “predicting” war.  He is considered (with folks like Quincy Wright and Kenneth Boulding) a father of the scientific analysis of conflict.

And Richardson helped lay the foundations for other fields and innovations: his work on coastlines and borders influenced Mandelbrot’s development of fractal geometry, and his method for the detection of icebergs anticipated the development of sonar.


“Our situation is unique in the annals of life, yet inscribed for all time in the logic of history”*…


A scan of the history of gross world product (GWP) at multi-millennium time scale generates fundamental questions about the human past and prospect. What is the probability distribution for negative shocks ranging from mild recessions to pandemics? Were the agricultural and industrial revolutions one-offs, or did they manifest dynamics still ongoing? Is the pattern of growth best seen as exponential, if with occasional step changes in the rate, or as superexponential? If the latter, how do we interpret the typical corollary, that output will become infinite in finite time? In a modest step toward answering such ambitious questions, this paper introduces the first internally consistent statistical model of world economic history…

[Looking back to 10,000 BCE, the author concludes that] the world economic system over the long term tends not to the steady growth seen in industrial countries in the last century or so, but to instability. The credible range of future paths seems wide.

Oh, so very wide… David Roodman (@davidroodman) goes big: “Modeling the Human Trajectory” (pdf).
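To make that “infinite in finite time” corollary concrete, here is a minimal sketch (not Roodman’s actual statistical model): if output obeys dY/dt = a·Y^(1+b) with b > 0, the closed-form solution blows up at a finite t*:

```python
# Superexponential growth dY/dt = a * Y**(1+b) has the closed-form
# solution Y(t) = Y0 / (1 - a*b*Y0**b * t)**(1/b), which diverges at
# t* = 1 / (a * b * Y0**b). A toy illustration, not Roodman's model.

def superexponential(y0: float, a: float, b: float, t: float) -> float:
    """Closed-form solution of dY/dt = a * Y**(1+b); valid for t < t*."""
    return y0 / (1.0 - a * b * y0**b * t) ** (1.0 / b)

y0, a, b = 1.0, 0.05, 0.5
t_star = 1.0 / (a * b * y0**b)   # with these numbers, t* = 40
print(f"finite-time singularity at t* = {t_star:.0f}")
for t in [0, 10, 20, 30, 35, 39]:
    print(f"t = {t:>2}: Y = {superexponential(y0, a, b, t):,.2f}")
```

Plain exponential growth (b = 0), by contrast, stays finite for all finite t – which is why the exponential-vs-superexponential question matters so much for the “credible range of future paths.”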


* François Meyer, 1974. La surchauffe de la croissance: Essai sur la dynamique de l’évolution

###

As we take the long view, we might recall that it is on this date each year that the roughly 300 residents of a small village participate in a drawing that determines who will be sacrificed to ensure a good harvest… in Shirley Jackson’s short story, “The Lottery.”

Originally published in the June 26, 1948, issue of The New Yorker, it evoked a strongly negative initial response: subscriptions were cancelled, hate mail poured in throughout the summer, and the Union of South Africa banned the story.  It is now considered a classic of short fiction (and among the most famous American short stories); it has spawned several radio, television, and film adaptations and has inspired voluminous analysis, both literary and sociological.



Written by (Roughly) Daily

June 27, 2020 at 1:01 am
