Archive for February 2025
“Animation is not the art of drawings that move but the art of movements that are drawn”*…

From Open Culture, an appreciation of an animator who, though never a commercial success in his own time, became an inspiration…
At a time when much of animation was consumed with little anthropomorphized animals sporting white gloves, Oskar Fischinger went in a completely different direction. His work is all about dancing geometric shapes and abstract forms spinning around a flat featureless background. Think of a Mondrian or Malevich painting that moves, often in time to the music. Fischinger’s movies have a mesmerizing elegance to them. Check out his 1938 short An Optical Poem. Circles pop, sway and dart across the screen, all in time to Franz Liszt’s 2nd Hungarian Rhapsody.
This is, of course, well before the days of digital. While it might be relatively simple to manipulate a shape in a computer, Fischinger’s technique was decidedly more low tech. Using bits of paper and fishing line, he individually photographed each frame, somehow doing it all in sync with Liszt’s composition. Think of the hours of mind-numbing work that must have entailed.
Born in 1900 near Frankfurt, Fischinger trained as a musician and an architect before discovering film. In the late 1920s, he moved to Berlin and started producing more and more abstract animations that ran before feature films. They proved to be popular too, at least until the National Socialists came to power. The Nazis were some of the most fanatical art critics of the 20th Century, and they hated anything non-representational. Paul Klee, Oskar Kokoschka, and Wassily Kandinsky, among others, were written off as “degenerate.” (By stark contrast, the CIA reportedly loved Abstract Expressionism, but that’s a different story.) Fischinger fled Germany in 1936 for the sun and glamour of Hollywood.
The problem was that Hollywood was really not ready for Fischinger. Producers saw the obvious talent in his work, but they feared that it was too far ahead of its time for broad audiences. “[Fischinger] was going in a completely different direction than any other animator at the time,” said famed graphic designer Chip Kidd in an interview with NPR. “He was really exploring abstract patterns, but with a purpose to them — pioneering what technically is the music video.”
Fischinger’s most widely seen American work was the section in Walt Disney’s Fantasia set to Bach’s Toccata and Fugue in D Minor [see it here]. Disney turned his geometric forms into mountain peaks and violin bows. Fischinger was apoplectic. “The film is not really my work,” Fischinger later reflected. “Rather, it is the most inartistic product of a factory. …One thing I definitely found out: that no true work of art can be made with that procedure used in the Disney studio.” Fischinger didn’t work with Disney again and instead retreated into the art world.
There he found admirers who were receptive to his vision. John Cage, for one, considered the German animator’s experiments to be a major influence on his own work. Cage recalled his first meeting with Fischinger in an interview with Daniel Charles in 1968.
One day I was introduced to Oscar Fischinger who made abstract films quite precisely articulated on pieces of traditional music. When I was introduced to him, he began to talk with me about the spirit, which is inside each of the objects of this world. So, he told me, all we need to do to liberate that spirit is to brush past the object, and to draw forth its sound. That’s the idea which led me to percussion…
Bonus: an excerpt from Fischinger’s cigarette ad from 1934:
An animator ahead of his time: “Optical Poems by Oskar Fischinger: Discover the Avant-Garde Animator Despised by Hitler & Dissed by Disney,” from @openculture.bsky.social.
You can find excerpts of other Fischinger films on Vimeo.
* Norman McLaren
###
As we appreciate art, we might recall that it was on this date in 1940 (16 days after its single-theater premiere) that Walt Disney’s Pinocchio was released. Although it received critical acclaim and became the first animated feature to win a competitive Academy Award– winning two (for Best Music, Original Score and for Best Music, Original Song for “When You Wish Upon a Star”)– it was initially a commercial failure (mainly due to World War II closing off the European and Asian markets). It eventually made a profit after its 1945 rerelease, and is now considered one of the greatest animated films ever made.
Pinocchio was also a major step forward in animation technique, especially in effects animation, an effort led by Joshua Meador. (In contrast to the character animators, who concentrate on the acting of the characters, effects animators create everything that moves other than the characters—vehicles, machinery, and natural effects such as rain, lightning, snow, smoke, shadows and water.)
… the water effects are the true standout in Pinocchio, representing an artistic achievement that would still be difficult to replicate today. To a certain extent, it was nothing more complicated than good old fashioned hard work: Effects animator Sandy Strother [see here] worked on nothing but water effects for a full year. But in addition to working hard, the animators were working smart: In the open-water scenes, for example, the water toward the back of the frame is less detailed and more impressionistic, allowing the artists to focus on making the foreground as rich in detail as possible.
But as detailed as that water is, it isn’t attempting photorealism; as with the character design, the focus is on how the water should function within the story and the emotional response it should provoke, not replicating the real world exactly. Compare the down-to-the-droplet detail of Pinocchio’s open-water scenes to those of Fleischer Studios’ first entry in the feature-animation game, Gulliver’s Travels. Released only a few months before Pinocchio, Gulliver’s Travels used rotoscoping, which had been developed at Fleischer. While the film’s water looks realistic and imposing, it has a flat, almost geometric look that undermines its visual punch. The way the water works as Monstro chases Geppetto and Pinocchio’s raft, by contrast, is terrifying and overwhelming, and not especially realistic. This is the power of animation: to mold and morph reality to function as something familiar, yet fantastical…
– source
Fischinger, who was primarily engaged down the hall on his ill-fated contribution to Fantasia (released late that same year), contributed to the effects animation of the Blue Fairy’s wand.
“The number 2 is a very dangerous number: that is why the dialectic is a dangerous process”*…
In order to bridge the yawning gulf between the humanities and the sciences, Gordon Gillespie suggests, we must turn to an unexpected field: mathematics…
In 1959, the English writer and physicist C P Snow delivered the esteemed Rede Lecture at the University of Cambridge [a talk now known as “The Two Cultures,” see here]. Regaled with champagne and Marmite sandwiches, the audience had no idea that they were about to be read the riot act. Snow diagnosed a rift of mutual ignorance in the intellectual world of the West. On the one hand were the ‘literary intellectuals’ (of the humanities) and on the other the (natural) ‘scientists’: the much-discussed ‘two cultures’. Snow substantiated his diagnosis with anecdotes of respected literary intellectuals who complained about the illiteracy of the scientists but who themselves had never heard of such a fundamental statement as the second law of thermodynamics. And he told of brilliant scientific minds who might know a lot about the second law but were barely up to the task of reading Charles Dickens, let alone an ‘esoteric, tangled and dubiously rewarding writer … like Rainer Maria Rilke.’
Sixty-plus years after Snow’s diatribe, the rift has hardly narrowed. Off the record, most natural scientists still consider the humanities to be a pseudo-science that lacks elementary epistemic standards. In a 2016 talk, the renowned theoretical physicist Carlo Rovelli lamented ‘the current anti-philosophical ideology’. And he quoted eminent colleagues such as the Nobel laureate Steven Weinberg, Stephen Hawking and Neil deGrasse Tyson, who agreed that ‘philosophy is dead’ and that only the natural sciences could explain how the world works, not ‘what you can deduce from your armchair’. Meanwhile, many humanities scholars see scientists as pedantic surveyors of nature, who may produce practical and useful results, but are blind to the truly deep insights about the workings of the (cultural) world. In his best-selling book The Fate of Rome (2017), Kyle Harper convincingly showed that a changing climate and diseases were major factors contributing to the final fall of the Roman Empire. The majority of Harper’s fellow historians had simply neglected such factors up to then; they had instead focused solely on the cultural, political and socioeconomic ones…
The divide between the two cultures is not just an academic affair. It is, more importantly, about two opposing views on the fundamental connection between mind and nature. According to one view, nature is governed by an all-encompassing system of laws. This image underlies the explanatory paradigm of causal determination by elementary forces. As physics became the leading science in the 19th century, the causal paradigm was more and more seen as the universal form of explanation. Nothing real fell outside its purview. According to this view, every phenomenon can be explained by a more or less complex causal chain (or web), the links of which can, in turn, be traced back, in principle, to basic natural forces. Anything – including any aspect of the human mind – that eludes this explanatory paradigm is simply not part of the real world, just like the ‘omens’ of superstition or the ‘astral projections’ of astrology.
On the opposing view, the human mind – be it that of individuals or collectives – can very well be regarded separately from its physical foundations. Of course, it is conceded that the mind cannot work without the brain, so it is not entirely independent of natural forces and their dynamics. But events of cultural significance can be explained as effects of very different kinds of causes, namely psychological and social, that operate in a sphere quite separate from that of the natural forces.
These divergent understandings underpin the worldviews of each culture. Naive realists – primarily natural scientists – like to point out that nature existed long before humankind. Nature is ordered according to laws that operate regardless of whether or not humans are around to observe. So the natural order of the world must be predetermined independently of the human mind. Conversely, naive idealists – including social constructivists, mostly encountered in the humanities – insist that all order is conceptual order, which is based solely on individual or collective thought. As such, order is not only not independent of the human mind, it’s also ambiguous, just as the human mind is ambiguous in its diverse cultural manifestations.
The clash of cultures between the humanities and the natural sciences is reignited over and over because of two images that portray the interrelationship of mind and nature very differently. To achieve peace between the two cultures, we need to overcome both views. We must recognise that the natural and the mental order of things go hand in hand. Neither can be fully understood without the other. And neither can be traced back to the other…
… The best mediator of a conciliatory view that avoids the mistake of the naive realist and the naive idealist is mathematics. Mathematics gives us shining proof that understanding some aspect of the world does not always come down to uncovering some intricate causal web, not even in principle. Determination is not explanation. And mathematics, rightly understood, demonstrates this in a manner that lets us clearly see the mutual dependency of mind and nature.
For mathematical explanations are structural, not causal. Mathematics lets us understand aspects of the world that are just as real as the Northern Lights or people’s behaviour, but are not effects of any causes. The distinction between causal and structural forms of explanation will become clearer in due course. For a start, take this example. Think of a dying father who wants to pass on his one possession, a herd of 17 goats, evenly to his three sons. He can’t do so. This is not the case because some hidden physical or psychological forces hinder any such action. The reason is simply that 17 is a prime number, so not divisible by three…
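[To spell out the arithmetic– our gloss, not the essay’s:]

```latex
% 17 goats cannot be divided evenly among three sons:
\[
  17 = 3 \cdot 5 + 2 \quad\Longrightarrow\quad 17 \equiv 2 \pmod{3}
\]
% Any split into three equal whole-goat shares leaves two goats over --
% a structural fact about the number 17, not a causal obstacle.
```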
… In his ‘two cultures’ speech, Snow located mathematics clearly in the camp of the sciences. But… mathematics doesn’t adhere to the explanatory paradigm of causal determination. This distinguishes it from the natural sciences. Nevertheless, mathematics tells us a lot about nature. According to Kant, it does so because it tells us a lot about the human mind. Mind and nature are inseparable facets of the world we inhabit and conceive. So, why should the humanities not also count as a science? They can tell us just as much about that one world on a fundamental level as the natural sciences. Mathematics demonstrates this clearly…
… Mathematics undermines the causal explanatory paradigm not only in its natural scientific manifestations, but also in its uses in the humanities. We give explanations for a wide variety of phenomena by hidden causes way too often and way too fast, where the simple admission of having no explanation would be not only more honest, but also wiser. Wittgenstein spoke of the disease of wanting to explain. This disease shows itself not just in our private everyday exchanges and in the usual public debates, but also in the scholarly discourse of the humanities. When confronted with individual or collective human thinking and behaviour, it is tempting to assume that just a few underlying factors are responsible. But, more often than not, there really is no such neat, analysable set of factors. Instead, there is a vast number of natural, psychological and societal factors that are all equally relevant for the emergence of the phenomenon one wants to explain. Perhaps a high-end computer could incorporate all these factors in a grand simulation. But a simulation is not an explanation. A simulation allows us to predict, but it doesn’t let us understand.
The aim of the humanities should not be to identify causes for every phenomenon they investigate. The rise and fall of empires, the economic and social ramifications of significant technological innovations, the cultural impact of great works of art are often products of irreducibly complex, chaotic processes. In such cases, trying to mimic the natural sciences by stipulating some major determining factors is a futile and misleading endeavour.
But mathematics shows that beyond the causal chaos there can be order of a different kind. The central limit theorem lets us see and explain a common regularity in a wide range of causally very different, but equally complex, natural processes. With this and many other examples of structural mathematical explanations of phenomena in the realm of the natural sciences in mind, it seems plausible that mathematical, or mathematically inspired, abstraction can also have fruitful applications in the humanities.
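[The theorem is easy to watch in action. A minimal, self-contained simulation– our illustration, not the essay’s: the average of many fair die rolls piles up in a bell curve around 3.5, even though a single roll is flat (uniform on 1–6).]

```typescript
// Central limit theorem, empirically: sample means of 100 die rolls
// form a bell-shaped histogram, whatever the shape of a single roll.
function meanOfRolls(n: number): number {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.floor(Math.random() * 6) + 1;
  return sum / n;
}

// Histogram 10,000 sample means over [3.0, 4.0) in bins of width 0.05.
const bins: number[] = new Array(20).fill(0);
for (let t = 0; t < 10_000; t++) {
  const idx = Math.floor((meanOfRolls(100) - 3.0) / 0.05);
  if (idx >= 0 && idx < bins.length) bins[idx]++;
}
for (let i = 0; i < bins.length; i++) {
  const label = (3.0 + i * 0.05).toFixed(2);
  console.log(`${label} ${"#".repeat(Math.round(bins[i] / 40))}`);
}
```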
This is by no means meant to promote an uncritical imitation of mathematics in the humanities and social sciences. (The overabundance of simplistic econometric models, for instance, is a huge warning sign.) Rather, it is meant to motivate scholars in these fields to reflect more upon where and when causal explanations make sense. Complexity can’t always be reduced to a graspable causal explanation, or narrative. To the contrary, often the most enlightening enquiries are not those that propose new factors as the true explainers, but those that show by meticulous analysis that far more factors are crucially in play than previously thought. This, in turn, should motivate scholars to seek aspects of their subject of interest beyond causality that are both relevant and amenable to structural forms of explanation. Besides probability theory, chaos theoretical methods and game theory come to mind as mathematical sub-disciplines with potentially fruitful applications in this regard.
However, the main point of our discussion is not that mathematical applications in the humanities might bridge the gap between the natural sciences and the humanities. The point is that mathematics, not really belonging to either camp, shows them to be on an equal footing from the start. The natural scientific paradigm of explanation is not the role model any respectable form of enquiry has to follow. Mathematics shows that natural causes can’t explain every phenomenon, not even every natural phenomenon and not even in principle. So, there is no need for the humanities, the ‘sciences of the mind’, to always strive for explanations by causes that can be ‘reduced’ to more elementary, natural forces. Moreover, mathematics shows that causality, of any kind, is not the only possible basis on which any form of explanation ultimately has to stand. Take for example the semantic relationships between many of our utterances. It is not at all clear that these can be explained in terms of psychological causes, or any other causes. It is not unreasonable to believe that the world is irreducibly structured, in part, by semantic relations, just as it is structured by probabilistic relations…
… The divide between the natural sciences and the humanities does not stem from the supposed fact that only those mental phenomena are real that are explainable in natural-scientific terms. Nor is the divide due to some extra-natural mental order, determined by causal relationships of a very different kind than those studied in the natural sciences. The mental world and the physical world are one and the same world, and the respective sciences deal with different aspects of this one world. Properly understood, insofar as they deal with the same phenomena, they do not provide competing but complementary descriptions of these phenomena.
Mathematics provides the most impressive proof that a true understanding of the world goes beyond the discovery of causal relationships – whether they are constituted by natural or cultural forces. It is worth taking a closer look at this proof. For it outlines the bond that connects mind and nature in particularly bright colours. Kant understood this bond as a ‘transcendental’ one. The late Wittgenstein, on the other hand, demonstrated its anchoring in language – not in the sense of a purely verbal and written practice, but in the sense of a comprehensive practice of actions the mental and bodily elements of which cannot be neatly separated. In the words of Wittgenstein, ‘commanding, questioning, recounting, chatting are as much a part of our natural history as walking, eating, drinking, and playing.’
Mathematics too is part of this practice. As such, like every science, it is inseparably rooted in both nature and the human mind. Unlike in the other sciences, though, this dual rootedness is obvious in the case of mathematics. One only has to see where it resides: beyond causality.
Uniting the “Two Cultures”? “Beyond Causality” in @aeon.co.
* C. P. Snow, The Two Cultures and the Scientific Revolution
###
As we come together, we might send carefully calculated birthday greetings to a man with a foot in each culture: Frank Plumpton Ramsey; he was born on this date in 1903. A philosopher, mathematician, and economist, he made major contributions to all three fields before his death, at the age of 26, in January 1930.
While he is probably best remembered as a mathematician and logician and as Wittgenstein’s friend and translator, he wrote three papers in economics: on subjective probability and utility (a response to Keynes, 1926), on optimal taxation (1927, described by Joseph E. Stiglitz as “a landmark in the economics of public finance”), and on optimal economic growth (1928, hailed by Keynes as “one of the most remarkable contributions to mathematical economics ever made”). The economist Paul Samuelson described them in 1970 as “three great legacies – legacies that were for the most part mere by-products of his major interest in the foundations of mathematics and knowledge.”
For more on Ramsey and his thought, see “One of the Great Intellects of His Time,” “The Man Who Thought Too Fast,” and Ramsey’s entry in the Stanford Encyclopedia of Philosophy.
“I tell you, sir, the only safeguard of order and discipline in the modern world is a standardized worker with interchangeable parts.”*…
… a sentiment that grates on the individualists among us. Still, there’s no denying the enormous impact that standardization has had. In an excerpt from his book, Exactly: How Precision Engineers Created The Modern World, Simon Winchester on the revolution that came from interchangeable parts…
Lewis Mumford, the historian and philosopher of technology, was one of the earliest to recognize the major role played by the military in the advancement of technology, in the dissemination of precision-based standardization, in the making of innumerable copies of the same and usually deadly thing, all iterations of which must be identical to the tiniest measure, in nanometers or better. The stories that follow, in which standardization and precision-based manufacturing are shown to become crucial ambitions of armies on both sides of the Atlantic, serve both to confirm Mumford’s prescience and to underline the role that the military plays in the evolution of precision. The examples from the early days of the science are of course far from secret; those from today, and that might otherwise be described in full to illustrate today’s very much more precise and precision-obsessed world, are among the most secure and confidential topics of research on the planet — kept in permanent shadow, as the dark side necessarily has to be.
It was in the French capital in 1785 that the idea of producing interchangeable parts for guns was first properly realized, and the precision manufacturing processes that allowed for it were ordered to be first put into operation. Still, it is reasonable to ask why, if the process was dreamed up in 1785, was it not being applied to the American musketry in official use in 1814, twenty-nine years later? Men were running, battles were being lost, great cities were being burned — and in part because the army’s guns were not being made as they should have been made. There is an answer, and it is not a pretty one.
Two little-remembered Frenchmen got the honor of first introducing the system that, had it been implemented in time and implemented properly, would have given America the guns it should have had. The first, the less familiar of the pair, despite the evidently superior nature of his name, was Jean-Baptiste Vaquette de Gribeauval, a wellborn and amply connected figure who specialized in designing cannons for the French artillery. He supposedly came up with a scheme, in 1776, for boring out cannons using almost exactly the same technique that John Wilkinson had invented in England, that of moving a rotating drill into a solid cannon-size and cannon-shaped slug of iron. Wilkinson had patented his precisely similar system two years earlier, in 1774, but nonetheless, the French system, the système Gribeauval, as it came to be known for the next three decades, long dominated French artillery making. It gave the French armies access to a range of highly efficient and lightweight, but manifestly not entirely originally conceived, field pieces. (Gribeauval did employ what were called go and no-go gauges as a means of ensuring that cannonballs fitted properly inside his cannons, but this was hardly revolutionary engineering, and it had been around in principle for five centuries.)
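[The go/no-go gauge deserves a pause: it is a binary acceptance test, and its logic is unchanged today. A minimal sketch, with illustrative dimensions and tolerances of our own invention:]

```typescript
// Go/no-go gauging reduced to its logic: a ball is accepted only if it
// passes through the "go" ring (not oversize) and fails to pass through
// the "no-go" ring (not undersize). Units are millimeters, illustrative.
function passesGoNoGo(diameter: number, goLimit: number, noGoLimit: number): boolean {
  const fitsGo = diameter <= goLimit;     // must fit the go gauge
  const fitsNoGo = diameter <= noGoLimit; // must NOT fit the no-go gauge
  return fitsGo && !fitsNoGo;
}

// A nominal 120 mm cannonball with a tolerance band of [119.5, 120.5]:
console.log(passesGoNoGo(120.2, 120.5, 119.5)); // true  -> accept
console.log(passesGoNoGo(121.0, 120.5, 119.5)); // false -> oversize
console.log(passesGoNoGo(119.0, 120.5, 119.5)); // false -> undersize
```

[Note that the gauge never measures anything; it only answers pass/fail– which is what made it practical on a shop floor long before precise instruments were common.]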
The second figure, the man who did the most to bring the system of interchangeable parts to the making of guns, and whose technique was, unlike Gribeauval’s, unchallengeable, was Honoré Blanc. He was not a soldier but a gunsmith, and during his apprenticeship he became well aware of the Gribeauval system. He decided early in his career that he could bring a similar standardization to the flintlock musket, for the benefit of the man on the battlefield.
Yet there was a difference. A cannon was big and heavy and crude — a gunner simply touched his linstock, with its attached lighted match, to the vent, and the cannon fired — and so such parts as there were proved easily amenable to standardization. With the flintlock, however, the lock (that part of a musket that delivered the spark that exploded the priming powder that ignited the main charge and drove the ball down the barrel) was a fairly delicate and complex piece of engineering, made of many oddly shaped parts and liable to all kinds of failure. To the uninitiated, the names of the bits and pieces of a flintlock alone are bewildering: a lock has parts that are variously known as the bridle, the sear, the frizzen, the pan, and any number of springs and screws and bolts and plates as well as, of course, the spark-producing (when struck by the aforementioned metal frizzen) piece of flint. To render the lock into a standard piece of military equipment, with all its parts made exactly the same for each lock, was going to be a tall order.
Cost, rather than the well-being of the infantryman or the conduct of the battle, was the prime motive. The French government declared in the mid-1780s that the country’s gunsmiths were charging too much for their craftsmanship, and demanded they improve their manufacturing process or lower their prices. The smiths not unnaturally balked at the impertinence of the suggestion, and promptly tried selling their products to the new armories and gun makers across the Atlantic in America, a move that alarmed the French government, as it imagined it might well run out of weaponry as a result.
It was at this point that Honoré Blanc entered the picture, taking a civilian job as the army’s quality-control inspector. His brother gunsmiths expressed their dismay over the fact that one of their number was going over to the other side– a poacher turning gamekeeper. Blanc dismissed the criticism and got on with his job, his own motivation being the welfare of the soldier out in the field rather than the government’s desire to cut costs. He was greatly influenced by M. de Gribeauval, and decided he could ape his system of standardization, ensuring that all the component parts of a flintlock were made as exact and faithful copies of one perfectly made master.
This master he made himself, carefully and with great precision, and with all the specifications laid down as precisely as possible (using the arcane system of the Ancien Régime, which still employed dimensional measures such as the pointe, the ligne, and the pouce) to tolerances of about what today we would recognize as 0.02 millimeters. He then made a series of jigs and gauges to ensure that all the locks made subsequently were faithful to this first perfect master, by the judicious use of files and such lathes as were available. The gunsmiths hired by Blanc to perform this task — by hand, still — made each lock exactly as the original. Providing that they did so, exactly, all the pieces would then fit perfectly together, and the whole assembled lock would fit equally perfectly into each completed weapon.
Yet only a small number of gunsmiths were willing to work under these stringent new conditions. Most balked. Making guns simply by copying parts reduced the value of the gunsmith’s craftsmanship to near insignificance, they argued. Unskilled drones could do their work instead. By arguing this, the French smiths were voicing much the same complaints as the Luddites had grumbled over in England: that precision was stripping their skills of worth. This argument would be heard many times in the future as the steady march of precision engineering advanced across Europe, the Americas, the world. The kind of mutinous sentiments heard in the English Midlands half a century before were now being muttered in northern France, as precision started to become an international phenomenon, its consequences rippling into the beyond.
Such was the hostility in France to Honoré Blanc, in fact, that the government had to offer him protection, and so sequestered him and his small but faithful crew of precision gun makers in the basement dungeons of the great Château de Vincennes, east of Paris. At the time, the great structure (much of it still standing, and much visited) was in use as a prison: Diderot had been incarcerated there, and the Marquis de Sade. In the relative peace of what would, within thirty years, become one of postrevolutionary France’s greatest arsenals, Blanc and his team worked away producing his locks, all of them supposedly identical. Blanc made all the necessary tools and jigs to help in his efforts — according to one source, hardening the metal pieces by burying them for weeks in the copious leavings of manure from the castle stables.
By July of 1785, Blanc was ready to offer a demonstration. He sent out invitations to the capital’s nabobs and military flag officers and to his still-hostile colleague gunsmiths, to show them what he had achieved. Many officials came, but few of the smiths, who were still seething. Yet one person of great future significance did present himself at the donjon’s fortified gates: the minister to France of the United States of America, Thomas Jefferson…
On the making of the modern world: interchangeable parts, from @simonwwriter, via the invaluable @delanceyplace.
* Jean Giraudoux, The Madwoman of Chaillot
###
As we mix and match, we might spare a thought for another contributor to our modern age, Jethro Tull; he died on this date in 1741. An agronomist who promoted planting seeds in rows (as opposed to “broadcast” sowing, simply casting the seeds around), he perfected a horse-drawn seed drill in 1701 that economically sowed the seeds in three neat rows; because of its internal moving parts (including a rotary mechanism that became part of all sowing devices that followed), it has been called the first agricultural machinery. He later developed a horse-drawn hoe and a four-coultered plow that made vertical cuts in the soil before the plowshare.
Tull’s methods– horse-hoeing and row seeding, effectively a rejection of traditional Virgilian husbandry– were initially controversial, but were steadily adopted by many landowners and helped to provide the basis for modern agriculture.
“Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital.”*…
Former Comptroller of the Currency Eugene Ludwig argues that, at least insofar as many (maybe most) Americans are concerned, unemployment is higher, wages are lower, and growth is less robust than government statistics suggest…
Before the presidential election, many Democrats were puzzled by the seeming disconnect between “economic reality” as reflected in various government statistics and the public’s perceptions of the economy on the ground. Many in Washington bristled at the public’s failure to register how strong the economy really was. They charged that right-wing echo chambers were conning voters into believing entirely preposterous narratives about America’s decline.
What they rarely considered was whether something else might be responsible for the disconnect — whether, for instance, government statistics were fundamentally flawed. What if the numbers supporting the case for broad-based prosperity were themselves misrepresentations? What if, in fact, darker assessments of the economy were more authentically tethered to reality?
On some level, I relate to the underlying frustrations. Having served as comptroller of the currency during the 1990s, I’ve spent substantial chunks of my career exploring the gaps between public perception and economic reality, particularly in the realm of finance. Many of the officials I’ve befriended and advised over the last quarter-century — members of the Federal Reserve, those running regulatory agencies, many leaders in Congress — have told me they consider it their responsibility to set public opinion aside and deal with the economy as it exists by the hard numbers. For them, government statistics are thought to be as reliable as solid facts.
In recent years, however, as my focus has broadened beyond finance to the economy as a whole, the disconnect between “hard” government numbers and popular perception has spurred me to question that faith. I’ve had the benefit of living in two realms that seem rarely to intersect — one as a Washington insider, the other as an adviser to lenders and investors across the country. Toggling between the two has led me to be increasingly skeptical that the government’s measurements properly capture the realities defining unemployment, wage growth and the strength of the economy as a whole.
These numbers have time and again suggested to many in Washington that unemployment is low, that wages are growing for middle America and that, to a greater or lesser degree, economic growth is lifting all boats year upon year. But when traveling the country, I’ve encountered something very different…
… Within the nation’s capital, this gap in perception has had profound implications. For decades, a small cohort of federal agencies have reported many of the same economic statistics, using fundamentally the same methodology or relying on the same sources, at the same appointed times. Rarely has anyone ever asked whether the figures they release hew to reality. Given my newfound skepticism, I decided several years ago to gather a team of researchers under the rubric of the Ludwig Institute for Shared Economic Prosperity to delve deeply into some of the most frequently cited headline statistics.
What we uncovered shocked us. The bottom line is that, for 20 years or more, including the months prior to the election, voter perception was more reflective of reality than the incumbent statistics. Our research revealed that the data collected by the various agencies is largely accurate. Moreover, the people staffing those agencies are talented and well-intentioned. But the filters used to compute the headline statistics are flawed. As a result, they paint a much rosier picture of reality than bears out on the ground.
Take, as a particularly egregious example, what is perhaps the most widely reported economic indicator: unemployment. Known to experts as the U-3, the number misleads in several ways. First, it counts as employed the millions of people who are unwillingly under-employed — that is, people who, for example, work only a few hours each week while searching for a full-time job. Second, it does not take into account many Americans who have been so discouraged that they are no longer trying to get a job. Finally, the prevailing statistic does not account for the meagerness of any individual’s income. Thus you could be homeless on the streets, making an intermittent income and functionally incapable of keeping your family fed, and the government would still count you as “employed.”
I don’t believe those who went into this past election taking pride in the unemployment numbers understood that the near-record low unemployment figures — the figure was a mere 4.2 percent in November — counted homeless people doing occasional work as “employed.” But the implications are powerful. If you filter the statistic to count as unemployed those who can’t find anything but part-time work or who make less than a poverty wage (roughly $25,000 a year), the percentage is actually 23.7 percent. In other words, nearly one of every four workers is functionally unemployed in America today — hardly something to celebrate…
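[A back-of-the-envelope sketch of how such a filter changes the headline number– the schema and field names here are our illustrative assumptions, not LISEP’s actual methodology:]

```typescript
// Hypothetical survey record; field names are illustrative, not the CPS's.
interface Respondent {
  inLaborForce: boolean;        // working, or actively seeking work
  employed: boolean;            // any paid work at all (the U-3 notion)
  involuntaryPartTime: boolean; // part-time only because full-time is unavailable
  annualWage: number;           // dollars per year
}

// U-3-style headline rate: share of the labor force with no job at all.
function u3Rate(pop: Respondent[]): number {
  const lf = pop.filter(r => r.inLaborForce);
  return lf.filter(r => !r.employed).length / lf.length;
}

// "Functional" rate in the spirit of Ludwig's argument: also count
// involuntary part-timers and anyone earning below a poverty wage.
const POVERTY_WAGE = 25_000; // the article's rough threshold

function functionalRate(pop: Respondent[]): number {
  const lf = pop.filter(r => r.inLaborForce);
  return lf.filter(r =>
    !r.employed || r.involuntaryPartTime || r.annualWage < POVERTY_WAGE
  ).length / lf.length;
}
```

[The same survey records can yield a 4 percent U-3 and a 20-plus percent functional rate; nothing in the underlying data changes, only the filter.]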
[Ludwig goes on to analyze data on wages, inflation, and GDP, finding them similarly flawed…]
… Take all of these statistical discrepancies together. What we have here is a collection of economic indicators that all point in the same misleading direction. They all shroud the reality faced by middle- and lower-income households. The problem isn’t that some Americans didn’t come out ahead after four years of Bidenomics. Some did. It’s that, for the most part, those living in more modest circumstances have endured at least 20 years of setbacks, and the last four years did not turn things around enough for the lower 60 percent of American income earners.
To be fair, the prevailing indicators aren’t without merit. It is, for example, useful to know how the wages of full-time employees have evolved. The challenge, quite separate from any quibbling with the talented people working to tell the nation’s economic story, is to provide policymakers with a full picture of the reality faced by the bulk of the population. What we need is to find new ways to provide a more realistic picture of the nation’s underlying economic conditions on a monthly basis. The indicators my colleagues and I have constructed could serve as the basis for or inspiration for government-sponsored alternatives. Regardless, something needs to change.
This should not be a partisan issue — policymakers in both parties would benefit from gleaning a more accurate sense of what’s happening at the ground level of the American economy. In reality, both Democrats and Republicans were vulnerable to being snowed in the 2024 cycle — it just happened that the dissatisfaction during this particular cycle undermined the incumbent party.
In an age where faith in institutions of all sorts is in free fall, Americans are perpetually told, per a classic quote from former Sen. Daniel Patrick Moynihan, that while we may be entitled to our own opinions, we aren’t entitled to our own facts. That should be right, at least in the realm of economics. But the reality is that, if the prevailing indicators remain misleading, the facts don’t apply. We have it in our grasp to cut through the mirage that led Democrats astray in 2024. The question now is whether we will correct course…
On the need to revise our economic reference statistics: “Voters Were Right About the Economy. The Data Was Wrong.” from @LISEP_org in @POLITICOMag. Eminently worth reading in full.
More on (and more-current readings of) the suggested “revised metrics” at the Ludwig Institute for Shared Economic Prosperity.
* attributed to Aaron Levenstein
###
As we muse on measurement and meaning, we might recall that it was on this date in 1979 that The Cars released “Good Times Roll,” the third single from their eponymous debut album.
“Nature doesn’t feel compelled to stick to a mathematically precise algorithm; in fact, nature probably can’t stick to an algorithm.”*…
Just over 30 years ago, my GBN partner Stewart Brand and I were discussing the then-new web affordance PointCast, an active screensaver that displayed news and other information tailored to a user’s expressed interests and delivered live over the Internet. It was big news at the time; and while it failed, it prefigured the emergence of the algorithms that today feed “preferences” that we don’t even need (nor, for that matter, have the opportunity) to articulate.
The problem, we mused, is that a system like that becomes a trap, one that (by simply satisfying expressed desires) implicitly works against discovery of the altogether new, of the thing we didn’t yet know might interest (or benefit) us. A system like that pulls us more deeply into holes instead of helping us explore broader horizons– it is biased against discovery, against learning (in its broadest sense). Our most important discoveries are often the books somewhere on the library shelf near the one we were seeking, the article in the (old print) newspaper next to the one to which we were initially drawn.
The answer, we imagined, wasn’t to skip such systems altogether (they can play a useful role); rather, it was to introduce a complementary “dial-up randomness”– to create ways to feed ourselves a stream of surprises.
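One way to picture that dial– a minimal sketch of the idea, not any product’s actual code: a feed that blends algorithmically recommended items with uniformly random draws from the whole catalog, the mix set by a single user-controlled knob.

```typescript
// A "dial-up randomness" feed: serendipity = 0 is pure recommendation,
// serendipity = 1 is a pure random draw from everything available.
function buildFeed<T>(
  recommended: T[],    // what an algorithm thinks you want
  catalog: T[],        // everything, for serendipitous draws
  size: number,
  serendipity: number  // 0..1 -- the user's own dial
): T[] {
  const feed: T[] = [];
  for (let i = 0; i < size; i++) {
    const source = Math.random() < serendipity ? catalog : recommended;
    feed.push(source[Math.floor(Math.random() * source.length)]);
  }
  return feed;
}

// e.g., one surprise for every three tailored items, on average:
// const feed = buildFeed(forYou, everything, 20, 0.25);
```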
Benj Edwards reports on just such an affordance…
[Recently] a New York-based app developer named Isaac Gemal [here] debuted a new site called WikiTok, where users can vertically swipe through an endless stream of Wikipedia article stubs in a manner similar to the interface for video-sharing app TikTok.
It’s a neat way to stumble upon interesting information randomly, learn new things, and spend spare moments of boredom without reaching for an algorithmically addictive social media app. To be fair, WikiTok is addictive in its own way, but without an invasive algorithm tracking you and pushing you toward lowest-common-denominator content. It’s also thrilling because you never know what’s going to pop up next.
WikiTok, which works through mobile and desktop browsers, feeds visitors a random list of Wikipedia articles—culled from the Wikipedia API—into a vertically scrolling interface. Despite the name that hearkens to TikTok, there are currently no videos involved. Each entry is accompanied by an image pulled from the corresponding article. If you see something you like, you can tap “Read More,” and the full Wikipedia page on the topic will open in your browser.
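[The fetch at the heart of such a feed is simple. A minimal sketch– our reconstruction against Wikipedia’s public REST API, not WikiTok’s actual code, which lives on Gemal’s GitHub:]

```typescript
// One random-article card, as returned by Wikipedia's REST API
// endpoint GET /page/random/summary; fields trimmed to what a feed needs.
interface WikiCard {
  title: string;
  extract: string;                             // the article stub text
  thumbnail?: { source: string };              // image, when the article has one
  content_urls: { desktop: { page: string } }; // the "Read More" target
}

// Fetch a batch of random articles in parallel to keep the feed full.
async function randomCards(count: number, lang = "en"): Promise<WikiCard[]> {
  const url = `https://${lang}.wikipedia.org/api/rest_v1/page/random/summary`;
  return Promise.all(
    Array.from({ length: count }, () =>
      fetch(url).then(res => res.json() as Promise<WikiCard>)
    )
  );
}

// Usage: pre-load five cards before the user swipes to the end.
randomCards(5).then(cards =>
  cards.forEach(c => console.log(c.title, "->", c.content_urls.desktop.page))
);
```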
For now, the feed is truly random, and Gemal is currently resisting calls to automatically tailor the stream of articles to the user’s interests based on what they express interest in.
“I have had plenty of people message me and even make issues on my GitHub asking for some insane crazy WikiTok algorithm,” Gemal told Ars. “And I had to put my foot down and say something along the lines that we’re already ruled by ruthless, opaque algorithms in our everyday life; why can’t we just have one little corner in the world without them?”
The breadth of topics you’ll encounter on WikiTok is staggering, owing to the wide range of knowledge that Wikipedia covers…
… Gemal posted the code for WikiTok on GitHub, so anyone can modify or contribute to the project. Right now, the web app supports 14 languages, article previews, and article sharing on both desktop and mobile browsers. New features may arrive as contributors add them. It’s based on a tech stack that includes React 18, TypeScript, Tailwind CSS, and Vite.
And so far, he is sticking to his vision of a free way to enjoy Wikipedia without being tracked and targeted. “I have no grand plans for some sort of insane monetized hyper-calculating TikTok algorithm,” Gemal told us. “It is anti-algorithmic, if anything.”
WikiTok cures boredom in spare moments with wholesome swipe-ups: “Developer creates endless Wikipedia feed to fight algorithm addiction,” @benjedwards.com in @arstechnica.com.
###
As we supersize serendipity, we might recall that it was on this date in 1968 that a remarkably warm and open new neighbor moved into the neighborhood: Misterogers’ Neighborhood premiered nationally on public television stations.
Fred McFeely Rogers was born in Latrobe, Pennsylvania on March 20, 1928. After earning his bachelor’s degree in music from Rollins College in 1951, he worked briefly for NBC in New York. In 1953, he began working at the new public television station WQED on the show The Children’s Corner, where he learned that sneakers were a lot quieter on the set than his dress shoes.
In 1963, Rogers moved to Toronto, Ontario to work on a new 15-minute show called Misterogers for CBC Television. In 1966, Rogers went back to WQED to create Misterogers’ Neighborhood.
In 1970, the show was renamed Mister Rogers’ Neighborhood. The series ended in 1976 but was picked up again three years later, when Rogers felt his work speaking to children wasn’t done. The show continued from 1979 through 2001. Mr. Rogers passed away on February 27, 2003.
In 2011, PBS created an animated “spinoff” of the show called Daniel Tiger’s Neighborhood, featuring the characters Rogers had created in his “land of make-believe”; and in 2019, Tom Hanks portrayed Rogers in the film A Beautiful Day in the Neighborhood, a role that earned him an Oscar nomination.