Posts Tagged ‘employment’
“Technological change is not additive; it is ecological. A new technology does not merely add something; it changes everything”*…
Insofar as (at the risk of sounding tautological) transformative technologies are concerned, Neil Postman is surely right. But then, as Roy Amara pointed out, “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” David Oks uses a common myth of technological replacement to illustrate– and more specifically, to observe that there’s a lot more to replacing labor than just automating tasks.
He begins by recounting an interview Ross Douthat conducted with J. D. Vance a few months ago, in which (in response to a question from Douthat about the potential downsides of AI, in particular the prospect of its “obsoleting” human workers) Vance responded sanguinely, arguing that ATMs didn’t eliminate bank tellers. Indeed, Vance suggested, “we have more bank tellers today than we did when the ATM was created, but they’re doing slightly different work…”
There are two interesting things about what Vance said, both relating to the example that he chose about bank tellers and ATMs.
The first thing is what it tells us about who J. D. Vance is. The bank teller story—how ATMs were predicted to increase bank teller unemployment, but in fact did not—isn’t a story you’ll hear from politicians; in fact, for a long time, Barack Obama would claim, incorrectly, that ATMs had decreased the number of bank tellers, in order to suggest that the elevated unemployment rate during his presidency was due to productivity gains from technology. I’ve never heard a politician cite the bank teller story before: but I have seen the bank teller story cited in a lot of blogs. I’ve seen it cited, for example, by Scott Alexander and Matt Yglesias and Freddie deBoer; and I’ve heard it, upstream of the humble bloggers, from such fine economists as Daron Acemoglu and David Autor. The story of how ATMs didn’t automate bank tellers is, indeed, something of a minor parable of the economics profession…
… But the other thing about the bank teller story that Vance cites is that it’s wrong. We do not, contrary to what Vance claims, have “more bank tellers today than we did when the ATM was created”: we in fact have far fewer. The story he tells Douthat might have been true in 2000 or 2005, but it hasn’t been true for years. Bank teller employment has fallen off a cliff. Here is a graph of bank teller employment since 2000:
So what happened to bank tellers? Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone. The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.
But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?
The answer, I think, is complementarity.
In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.
That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.
But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. This is the famous lesson of electricity and productivity growth, which I’ll return to in a future piece. When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant…
[Oks unpacks the stories of the ATM’s and iPhone’s impact on banking, then looks ahead, by analogy, to what might be in store with AI. He concludes…]
… I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.
The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.
I don’t think we’ve really yet learned what those new structures will look like. But, at the limit, I don’t quite know why humans have to be involved in those: though I suspect that by the time we’re dealing with the fully-automated organizations of the future, our current set of concerns will have been largely outmoded by new and quite foreign ones, as has always been the case with human progress.
But, however optimistic I might be about the human future, I don’t think it’s worth leaning on the history of past technologies for comfort. The ATM parable is a comforting narrative; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story…
Eminently worth reading in full: “Why ATMs didn’t kill bank teller jobs, but the iPhone did.”
As to whether the wisdom of Amara and Oks is widely shared, consider this from Crunchbase:
Crunchbase data shows global venture investment totaled $189 billion in February — the largest startup funding month on record — although 83% of capital raised went to just three companies. They include OpenAI, which raised $110 billion in what was also the largest round ever raised by a private, venture-backed company.
The record month for venture funding took place against the backdrop of a trillion-dollar stock market drop as AI compute and tooling unsettled leading public software companies. [See also here.]
All told, venture investment was up close to 780% year over year from the $21.5 billion raised by startups in February 2025.
OpenAI was not the only company to raise tens of billions of dollars last month. Its closest rival, Anthropic, raised $30 billion, marking the third-largest venture round on record.
Waymo, Alphabet‘s self-driving division, raised $16 billion. Together, those three rounds totaled $156 billion, representing 83% of the global venture capital raised in February.
A further four companies each raised $1 billion or more last month: Tokyo-based semiconductor manufacturer Rapidus; London-based self-driving platform Wayve; San Francisco-based AI for robotics World Labs; and Sunnyvale, California-based AI semiconductor company Cerebras Systems.
These massive rounds were led by strategic corporate investors, a host of private equity and alternative investors, as well as a few multistage venture investors and a government agency…
– “Massive AI Deals Drive $189B Startup Funding Record In February While Public Software Stocks Reel”
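A quick back-of-the-envelope check (a sketch in Python; the dollar figures are simply those quoted in the Crunchbase excerpt above) confirms the percentages are internally consistent:

```python
# Sanity check of the Crunchbase figures quoted above.
# Dollar amounts (in billions) are as reported; this only verifies
# that the percentages in the excerpt are internally consistent.

rounds = {"OpenAI": 110, "Anthropic": 30, "Waymo": 16}
total_feb = 189          # global venture investment for the month (reported)
total_feb_prior = 21.5   # February 2025 total (reported)

top_three = sum(rounds.values())                # 110 + 30 + 16 = 156
top_three_share = top_three / total_feb * 100   # ~82.5%, i.e. the "83%" quoted
yoy_growth = (total_feb - total_feb_prior) / total_feb_prior * 100  # "close to 780%"

print(top_three, round(top_three_share), round(yoy_growth))  # → 156 83 779
```

So the three mega-rounds do sum to $156 billion (about 83% of the month's total), and $189 billion against $21.5 billion a year earlier is a roughly 779% increase — "close to 780%," as the piece says.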
As Carlota Perez explains in Technological Revolutions and Financial Capital, we’re forever blowing bubbles…
* Neil Postman
###
As we contemplate change, we might send sanitary, odor-free birthday greetings to Sir Joseph William Bazalgette; he was born on this date in 1819. A civil engineer, he became chief engineer of London’s Metropolitan Board of Works, in which role his major achievement was a response to the “Great Stink” of July and August 1858, during which very hot weather exacerbated the smell of untreated human waste and industrial effluent. Bazalgette oversaw the creation of a sewer network for central London which addressed the problem– and was instrumental in relieving the city from cholera epidemics, in beginning the cleansing of the River Thames, and in creating (a crucial part of) the infrastructure that underlay its extraordinary growth over the next century.
“Jobs in factories will come roaring back into our country”*…
When President Trump announced sweeping tariffs on “Liberation Day” last spring, the promise was that manufacturing– and the jobs it provides– would return to the U.S. Scott Lincicome (from the conservative Cato Institute) assesses the “progress” to date…
US manufacturing ended 2025 with a thud, capping a rough year for the sector. To recap, manufacturers shed 63,000 jobs, according to the latest data from the Bureau of Labor Statistics. It wasn’t just labor that was hurting. The Institute for Supply Management’s manufacturing index clocked in at 47.9 for December, marking the 10th consecutive month of contraction as new orders were especially weak and costs at historically elevated levels.
Then there’s the Federal Reserve’s Beige Book of regional economic conditions and surveys from the regional Fed banks, which have repeatedly documented cases of manufacturers delaying hiring and investment amid weak market conditions, rising costs, shrinking profit margins and persistent uncertainty. As for the “hard” data, manufacturing capacity and output, while incomplete, sagged through the Fall.
Overall, the evidence reveals a sector that’s stagnant at best, and a long way from the manufacturing renaissance President Donald Trump promised when he took office for a second time a year ago. No wonder administration officials have pivoted from predicting a factory boom in 2025 to now saying it will happen in 2026 and beyond.
Better tax, regulatory, and monetary policy should indeed provide a tailwind for manufacturing, but the sector will probably continue to struggle. If so, Trump’s tariffs will be a big reason why…
[Lincicome unpacks the several ways that Trump’s tariffs have confounded domestic manufacturing: they have increased costs (especially on materials/components not available in the U.S.); they have imposed tariff and regulatory policies that might politely be called “inconsistent” (or less politely, “flighty”– last year, the US tariff code was amended 50 times), which has added management/coordination costs (Federal Reserve economists estimate that domestic manufacturers will pay $39 billion to $71 billion annually to comply with the new regime, time and money they can’t spend on their businesses); and, perhaps even more damagingly, they have created uncertainty that has slowed corporate action/investment. Lincicome concludes…]
… The harms to manufacturers are consistent with research on past tariff episodes and help to explain why the sector struggled in 2025 — and why things might not get much better this year. Recent forecasts also suggest caution, with manufacturers and supply chain professionals predicting continued headwinds due to the costs, uncertainty and complexity of tariffs. And the Supreme Court won’t save them. If it invalidates Trump’s “emergency” tariffs in the coming days, administration officials have promised to invoke alternate authorities to recreate them.
Global supply chains took years to develop. They’ll take even longer to reorganize, and will do so at great cost– if, that is, they don’t break altogether in the meantime…
“America’s Manufacturing Renaissance Is Missing in Action,” (gift article) by @scottlincicome.bsky.social in @opinion.bloomberg.com.
Relatedly, Trump’s immigration policy was (like the “manufacturing boom”) supposed to have reduced the federal deficit. The Administration is deporting immigrants at a brisk clip– but at an extraordinary cost, both economically and constitutionally. That’s not to mention the costs to the targeted immigrants themselves, to their families, and to the companies and economies of which they have been preponderantly positive and productive parts. Indeed, a different group at Cato recently published a thorough study demonstrating that– far from being a drag on the economy– immigrants have reduced federal (and state and local) deficits by $14.5 trillion since 1994… though, of course, that contribution is now, thanks to the ICE storm, slowing down.
The immigration crackdown was also supposed to turbo-charge job growth (for the U.S.-born); it has not. Indeed, the climate of fear and the difficulty in securing visas has led to a hiring boom abroad: “Silicon Valley can’t import talent like before. So it’s exporting jobs.”
It’s easy to see Trump’s election and the imposition of his economic and immigration policies as America’s Brexit. That abrupt rupture of social, cultural, and economic conventions is now about a decade old… and the results aren’t pretty…
Brexit, the United Kingdom’s decision to withdraw from the European Union, is a rare contemporary example of a major developed economy raising trade barriers and more generally pulling back from international economic integration. When the Brexit referendum took place in 2016, academic and professional economists generally forecast that the policy about-face would result in a negative hit to the United Kingdom’s economy of about 4% of GDP over the long term. Rather than a sudden, visible economic shock following the vote, the costs of Brexit have been gradual and cumulative. Now, almost a decade later, new research aims to assess Brexit’s actual impact on the United Kingdom’s economy, which involves the challenging task of comparing the country’s economic indicators to what they would have been if the United Kingdom had remained in the European Union. This research finds that, ten years on, the economic cost of Brexit has been larger than analysts predicted and that prolonged policy uncertainty contributed importantly to the magnitude of the impact…

We estimate that by 2025, Brexit had reduced UK GDP by 6% to 8%, with the impact accumulating gradually over time… Understanding the ways in which Brexit resulted in a drag on economic growth for the United Kingdom provides potential lessons about the costs of abruptly pulling back from the global economy for other countries…

– “The Economic Costs of Brexit on the UK” (where there is much more detail)
* Donald Trump
###
As we interrogate empty promises (and lest we think that history doesn’t rhyme), we might recall that it was on this date in 1856 that the Know Nothing Party (dba, “the American Party” and “Native American Party”) convened in Philadelphia to nominate its first presidential candidate. A nativist (and largely anti-Catholic) group composed of anti-immigrant/Old Stock breakaways from the American Republican and Whig parties, the Know Nothings nominated Millard Fillmore.
The last member of the Whig Party to serve as President, Fillmore had been a Congressional Representative from New York who was elected to the Vice Presidency in 1848 on Zachary Taylor’s ticket. When Taylor died in 1850, Fillmore became the second V.P. to assume the presidency between elections.
Fillmore’s signature accomplishment was the passage of the Compromise of 1850, a bargain that led to a brief truce in the battle over slavery– but was so ill-conceived (it contained the Fugitive Slave Act) and unpopular that Fillmore failed to get his own party’s nomination for President in the election of 1852, which he sat out. Unwilling to follow Lincoln into the new Republican Party, he got the nomination of the Know Nothings– though he was not a member of the party and hadn’t sought it; he was out of the country during the convention. Fillmore finished third in the 1856 election. By the 1860 election, the Know Nothings were no longer a serious national political movement.

“Humanity is acquiring all the right technology for all the wrong reasons”*…
Further to yesterday’s post on the poverty created by manufacturing displacement, and in the wake of the Sturm und Drang occasioned by the coup at OpenAI, the estimable Rana Foroohar on the politics of AI…
… Consider that current politics in the developed world — from the rise of Donald Trump to the growth of far right and far left politics in Europe — stem in large part from disruptions to the industrial workforce due to technology and globalisation. The hollowing out of manufacturing work led to more populist and fractious politics, as countries tried (and often failed) to balance the needs of the global marketplace with those of voters.
Now consider that this past summer, the OECD warned that white-collar, skilled labour representing about a third of the workforce in the US and other rich countries is most at risk from disruption by AI. We are already seeing this happen in office work — with women and Asians particularly at risk since they hold a disproportionate amount of roles in question. As our colleague John Burn-Murdoch has charted [image above], online freelancers are especially vulnerable.
So, what happens when you add more than three times as many workers, in new subgroups, to the cauldron of angry white men that have seen their jobs automated or outsourced in recent decades? Nothing good. I’m always struck when CEOs like Elon Musk proclaim that we are headed towards a world without work as if this is a good thing. As academics like Angus Deaton and Anne Case have laid out for some time now, a world without work very often leads to “deaths of despair,” broken families, and all sorts of social and political ills.
Now, to be fair, Goldman Sachs has estimated that the productivity impact of AI could double the recent rate — mirroring the impact of the PC revolution. This would lead to major growth which could, if widely shared, do everything from cut child poverty to reduce our burgeoning deficit.
But that’s only if it’s shared. And the historical trend lines for technology aren’t good in that sense — technology often widens wealth disparities before labour movements and government regulation equalise things. (Think about the turn of the 20th century, up until the 1930s). But the depth and breadth of AI disruption may well cause unprecedented levels of global labour displacement and political unrest.
I am getting more and more worried that this is where we may be heading. Consider this new National Bureau of Economic Research working paper, which analyses why AI will be as transformative as the industrial revolution. It also predicts, however, that there is a very good chance that it lowers the labour share radically, even pushing it to zero, absent policies that prevent this (the wonderful Daron Acemoglu and Simon Johnson make similar points, and lay out the history of such tech transformation in their book Power and Progress)…
We can’t educate ourselves out of this problem fast enough (or perhaps at all). We also can’t count on universal basic income to fix everything, no matter how generous it could be, because people simply need work to function (as Freud said, it’s all about work and love). Economists and political scientists have been pondering the existential risks of AI — from nuclear war to a pandemic — for years. But I wonder if the real existential crisis isn’t a massive crisis of meaning, and the resulting politics of despair, as work is displaced faster than we can fix the problem…
Everyone’s worried about AI, but are we worried about the right thing? “The politics of AI,” from @RanaForoohar in @FT.
See also: Henry Farrell‘s “What OpenAI shares with Scientology” (“strange beliefs, fights over money, and bad science fiction”) and Dave Karpf‘s “On OpenAI: Let Them Fight.” (“It’s chaos… And that’s a good thing.”)
For a different point-of-view, see: “OpenAI and the Biggest Threat in the History of Humanity,” from Tomás Pueyo.
And for deep background, read Benjamin Labatut‘s remarkable The MANIAC.
* R. Buckminster Fuller
###
As we equilibrate, we might recall that it was on this date in 1874 that electrical engineer, inventor, and physicist Ferdinand Braun published a paper in the Annalen der Physik und Chemie describing his discovery of the electrical rectifier effect, the original practical semiconductor device.
Braun is better known for his contributions to the development of radio and television technology: he shared the 1909 Nobel Prize in Physics with Guglielmo Marconi “for their contributions to the development of wireless telegraphy” (Braun invented the crystal tuner and the phased-array antenna); was a founder of Telefunken, one of the pioneering communications and television companies; and (as the builder of the first cathode ray tube) has been called the “father of television” (shared with inventors like Paul Gottlieb Nipkow).
“Find a job you enjoy doing, and you will never have to work a day in your life”*…
Your correspondent is headed into another period of turbulence– travel, talk, meetings– this one, a little longer than the last; so (Roughly) Daily is about to go into another hiatus. Regular service should resume on or around October 8.
If only it were so easy… There is always a demand for more jobs. But what makes a job good? Tyler Re suggests that Kant has an answer…
Work is no longer working for us. Or, for most of us anyway. Citing lack of pay and promotion, more people are quitting their jobs now than at any time in the past 20 years. This is no surprise, considering that ‘real wages’ – the average hourly rate adjusted for inflation – for non-managers just three years ago was the same as it was in the early 1970s. At the same time, the increasing prominence of gig work has turned work from a steady ‘climb’ of the ladder into a precarious ‘hustle.’
…
The United States Department of Labor identifies a ‘good job’ as one with fair hiring practices, comprehensive benefits, formal equality of opportunity, job security and a culture in which workers are valued. In a similar UK report on the modern labour market called ‘Good Work’ (2017), Matthew Taylor and his colleagues emphasise workplace rights and fair treatment, opportunities for promotion, and ‘good reward schemes’. Finally, the UN’s Universal Declaration of Human Rights has two sections on work. They cite the free choice of employment and organization, fair and equal pay, and sufficient leisure time as rights of workers.
What all three of these accounts have in common is that they focus on features of jobs – the agreement you make with your boss to perform labour – rather than on the labour itself. The fairness of your boss, the length of your contract, the growth of your career – these specify nothing about the quality of the labour you perform. And yet it is the labour itself that we spend all day doing. The most tedious and unpleasant work could still pay a high salary, but we might not want to call such work ‘good’. (Only a brief mention is made in the Taylor report – which totals more than 100 pages – of the idea that workers ought to have some autonomy in how they perform their job, or that work ought not be tedious or repetitive.) This is not to say that the extrinsic aspects of work like pay and benefits are unimportant; of course, a good job is one that pays enough. But what about work’s intrinsic goods? Is there anything about the process of working itself that we ought to include in our list of criteria, or should we all be content with a life of high-paying drudgery?
Philosophers try to answer this question by giving a definition of work. Since definitions tell us what is essential or intrinsic to a thing, a definition of work would tell us whether there is anything intrinsic to work that we want our good jobs to promote. The most common definition of work in Western thought, found in nearly every period with recorded writing on the subject, is that work is inherently disagreeable and instrumentally valuable. It is disagreeable because it is an expenditure of energy (contrast this with leisure), and it is instrumentally valuable because we care only about the products of our labour, not the process of labouring itself. On this view, work has little to recommend it, and we would do better to minimise our time spent doing it. A theory of work based on this definition would probably say that good jobs pay a lot (in exchange for work’s disagreeableness) and are performed for as little time as possible.
But this is not the only definition at our disposal. Tucked away in two inconspicuous paragraphs of his book about beauty, the Critique of Judgment (1790), is Immanuel Kant’s definition of work. In a section called ‘On Art in General’, Kant gives a definition of art (Kunst in German) as a subset of our more general capacity for ‘skill’ or ‘craft’ (note that Kant’s definition should not be limited to the fine arts like poetry or painting, which is schöne Künste in German, which he addresses in the following section of the book). In other words, Kant defines art as a particular kind of skilled labour. Kant’s definition of art as skilled labour will direct us to the intrinsic features of work that we ought to include in our conception of good jobs…
Read on: “Freedom at Work,” in @aeonmag.
* Mark Twain
###
As we center satisfaction, we might recall that on this date in 1908, at the Ford Piquette Avenue Plant in Detroit, “Model T 001”– the first production Model T– rolled off the line. Generally regarded as the first mass-produced/mass-affordable automobile, it made car travel available to middle-class Americans– and became the avatar of assembly-line production and the type of jobs that it produces.
(On May 26, 1927, Henry Ford watched the 15 millionth Model T Ford roll off the assembly line at his factory in Highland Park, Michigan.)

“The roots of the word ‘compete’ are the Latin con petire, which meant ‘to seek together'”*…
Peter Thiel would have us believe that “competition is for losers.” The FTC begs to differ. Earlier this month, it introduced a proposed rule that would eliminate employee noncompete agreements. Scott Galloway explains why this is a very good plan…
… Yesterday’s iconoclasts pull the ladder up behind them the moment they become today’s icons. We’re in general agreement that “anti-competitive” behavior is bad, and have laws against it. Yet companies have been able to convince regulators to look the other way on an increasingly popular weapon of mass entrenchment. They’re passing out OxyContin during an AA meeting. The Oxy? Noncompete agreements…
Noncompete clauses are what firms use to sequester your human capital from competitors. When a new employee signs a noncompete with, say, Johnson & Johnson, they agree that when their employment ends, they won’t work at another pharmaceutical company for a designated period — usually one to two years. If you’re familiar with noncompetes, you likely associate them with technology jobs, where employers want to protect valuable intellectual property. And that’s the defense most often offered for the restrictions. BTW, the argument is bullshit … a confidentiality agreement does the trick.
The irony of noncompetes is they only serve to dampen growth. One of the few places where they’re banned is also home to the world’s most innovative tech economy: California. Job-hopping and seeding new acorns have been part of Silicon Valley since the beginning. In 1994 a Berkeley economist theorized that California’s ban on noncompetes was one of the main reasons Silicon Valley existed at all, and in 2005, economists at the Federal Reserve put forward statistical evidence supporting the theory. Apple, Disney, Google, Intel, Meta, Netflix, Oracle, and Tesla were able to succeed without limiting the options of their employees.
Yet outside California, corporate boardrooms love noncompetes. Historically they were attached only to high-skilled, high-paying jobs. Now they’re becoming ubiquitous across different industries at all levels. Fast-food workers are being forced to sign noncompetes, as are hairstylists and security guards. Roughly a third of minimum wage jobs in America now require such agreements. If forcing noncompetes on America’s lowest-paid workers sounds like indentured servitude, trust your instincts.
Employers claim noncompetes give them the assurance to pay for training and other investments in their employees. There is some evidence that noncompetes are associated with more worker training. But there’s a catch: They also decrease wages. The good news is we’ll train you to operate the fryer, the bad news is we won’t pay you a living wage to do it — and you can’t take a better job across the street.
The FTC estimates that noncompetes reduce employment opportunities for 30 million people and suppress wages by $300 billion per year. That’s far more than the total value of property stolen outright every year. Multiple studies also show that noncompetes reduce entrepreneurship and business formation. Which makes sense — it’s difficult to start a business when talent pools are not accessible or allocated to their best use. Downstream, the lack of competition leads to entrenchment, which eventually results in higher prices for consumers — as one study found has occurred in health care. Everybody loses. Except, of course, the incumbent’s shareholders…
It’s worth remembering the insight of W. Edwards Deming, one of the architects of Japan’s rise to industrial leadership after World War II: “In 1945, the world was in a shambles. American companies had no competition. So nobody really thought much about quality. Why should they? The world bought everything America produced. It was a prescription for disaster.”
The case against noncompete agreements: “Compete,” from @profgalloway.
See also: “Noncompete Agreements Reduce Worker Pay — and Overall Economic Activity” (source of the image above).
* Mihaly Csikszentmihalyi
###
As we remove the shackles, we might recall that it was on this date in 1915 that Ralph Chaplin, a Wobbly (a member of the Industrial Workers of the World), finished his poem “Solidarity Forever”– which, sung to the tune of “John Brown’s Body”/“The Battle Hymn of the Republic,” has become a labor movement anthem.