(Roughly) Daily

Posts Tagged ‘policy’

“Agriculture engenders good sense, and good sense of an excellent kind”*…

In an influential 1943 essay, Polish economist Michał Kalecki staged a contest between capitalism’s pursuit of profit and its pursuit of power. While government-sponsored full employment would benefit capitalists economically, Kalecki argued, it would also fundamentally threaten their social position—and the latter mattered more. If wide sections of the country came to believe that the government could replace the private sector as a source of investment and even hiring, capitalists would have to relinquish their role as the ultimate guardians of national economic health, and along with it their immense power over workers. Kalecki thus saw how the desire to maintain political dominance could override purely economic considerations.

This analysis finds a striking illustration in historian Ariel Ron’s award-winning new book Grassroots Leviathan, which advances a major reinterpretation of the contours of U.S. political economy and the origins of the U.S. developmental state—the government institutions that have played an active role in shaping economic and technological growth. In Ron’s revisionist account, the groundwork for the rapid economic development in the second half of the nineteenth century was less industrial and elite than agricultural and popular. “Despite the abiding myth that the Civil War pitted an industrial North against an agrarian South,” he writes, “the truth is that agriculture continued to dominate the economic, social, and cultural lives of the majority of Americans well into the late nineteenth century.” This central fact—at odds with familiar portraits of a dwindling rural population in the face of sweeping urban industrialization—carried with it shifting attitudes toward the state and the economy, dramatically altering the course of U.S. politics. Far from intrinsically opposed to government, a consequential strain of agrarianism welcomed state intervention and helped develop new ideas about the common good…

How a grassroots movement of American farmers laid the foundation for state intervention in the economy, embracing government investment and challenging the slaveholding South in the run-up to the Civil War: “In the Common Interest.”

* Joseph Joubert


As we hone our history, we might recall that it was on this date in 1952 that Mylar was registered as a DuPont trademark. A very strong polyester film that has gradually replaced cellophane, Mylar is put to many purposes; given its strength, flexibility, and properties as an aroma barrier, it is widely used in food packaging.


Written by LW

June 10, 2021 at 1:01 am

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”*…

There’s a depressing sort of symmetry in the fact that our modern paradigms of privacy were developed in response to the proliferation of photography and its exploitation by tabloids. The seminal 1890 Harvard Law Review article The Right to Privacy—which every essay about data privacy is contractually obligated to cite—argued that the right of an individual to object to the publication of photographs ought to be considered part of a general ‘right to be let alone’.

130 years on, privacy is still largely conceived of as an individual thing, wherein we get to make solo decisions about when we want to be left alone and when we’re comfortable being trespassed upon. This principle undergirds the notice-and-consent model of data management, which you might also know as the Pavlovian response of clicking “I agree” on any popup and login screen with little regard for the forty pages of legalese you might be agreeing to.

The thing is, the right to be left alone makes perfect sense when you’re managing information relationships between individuals, where there are generally pretty clear social norms around what constitutes a boundary violation. Reasonable people can and do disagree as to the level of privacy they expect, but if I invite you into my home and you snoop through my bedside table and read my diary, there isn’t much ambiguity about that being an invasion.

But in the age of ✨ networked computing ✨, this individual model of privacy just doesn’t scale anymore. There are too many exponentially intersecting relationships for any of us to keep in our head. It’s no longer just about what we tell a friend or the tax collector or even a journalist. It’s the digital footprint that we often unknowingly leave in our wake every time we interact with something online, and how all of those websites and apps and their shadowy partners talk to each other behind our backs. It’s the cameras in malls tracking our location and sometimes emotions, and it’s the license plate readers compiling a log of our movements.

At a time when governments and companies are increasingly investing in surveillance mechanisms under the guise of security and transparency, that scale is only going to keep growing. Our individual comfort about whether we are left alone is no longer the only, or even the most salient part of the story, and we need to think about privacy as a public good and a collective value.

I like thinking about privacy as being collective, because it feels like a more true reflection of the fact that our lives are made up of relationships, and information about our lives is social and contextual by nature. The fact that I have a sister also indicates that my sister has at least one sibling: me. If I took a DNA test through 23andme I’m not just disclosing information about me but also about everyone that I’m related to, none of whom are able to give consent. The privacy implications for familial DNA are pretty broad: this information might be used to sell or withhold products and services, expose family secrets, or implicate a future as-yet-unborn relative in a crime. I could email 23andme and ask them to delete my records, and they might eventually comply in a month or three. But my present and future relatives wouldn’t be able to do that, or even know that their privacy had been compromised at all.

Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us. I might think nothing of posting a photo of going out with my friends and mentioning the name of the bar, but I’ve just exposed our physical location to the internet. If one of my friends has had to deal with a stalker in their past, I could’ve put their physical safety at risk. Even if I’m careful to make the post friends-only, the people I trust are not the same as the people my friends trust. In an individual model of privacy, we are only as private as our least private friend.

Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me.

Data collection isn’t always bad, but it is always risky. Sometimes that’s due to shoddy design and programming or lazy security practices. But even the best engineers often fail to build risk-free systems, by the very nature of systems.

Systems are easier to attack than they are to defend. If you want to defend a system, you have to make sure every part of it is perfectly implemented to guard against any possible vulnerabilities. Oftentimes, trying to defend a system means adding additional components, which just ends up creating more potential weak points. Whereas if you want to attack, all you have to do is find the one weakness that the systems designer missed. (Or, to paraphrase the IRA, you only have to be lucky once.)

This is true of all systems, digital or analog, but the thing that makes computer systems particularly vulnerable is that the same weaknesses can be deployed across millions of devices, in our phones and laptops and watches and toasters and refrigerators and doorbells. When a vulnerability is discovered in one system, an entire class of devices around the world is instantly a potential target, but we still have to go fix them one by one.

This is how the Equifax data leak happened. Equifax used a piece of open source software that had a security flaw in it, the people who work on that software found it and fixed it, and instead of diligently updating their systems Equifax hit the snooze button for four months and let hackers steal hundreds of millions of customer records. And while Equifax is definitely guilty of aforementioned lazy security practices, this incident also illustrates how fragile computer systems are. From the moment this bug was discovered, every server in the world that ran that software was at risk.

What’s worse, in many cases people weren’t even aware that their data was stored with Equifax. If you’re an adult who has had a job or a phone bill or interacted with a bank in the last seven years, your identifying information is collected by Equifax whether you like it or not. The only way to opt out would have been to be among the small percentage of overwhelmingly young, poor, and racialized people who have no credit histories, which significantly limits the scope of their ability to participate in the economy. How do you notice-and-consent your way out of that?

There unfortunately isn’t one weird trick to save democracy, but that doesn’t mean there aren’t lessons we can learn from history to figure out how to protect privacy as a public good. The scale and ubiquity of computers may be unprecedented, but so is the scale of our collective knowledge…

Read the full piece (and you should) for Jenny Zhang’s (@phirephoenix) compelling case that we should treat– and protect– privacy as a public good, and her explanation of how we might do that: “Left alone, together.” TotH to Sentiers.

[image above: source]

* Edward Snowden


As we think about each other, we might recall that it was on this date in 1939 that the first government appropriation was made to support the construction of the Harvard Mark I computer.

Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. It was completed in 1944 (and put to work on a set of war-related tasks, including calculations—overseen by John von Neumann—for the Manhattan Project).

The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.


“There’s class warfare, all right, but it’s my class, the rich class, that’s making war, and we’re winning”*…

In the past few decades, the Gini coefficient—a standard measure of income distribution across population segments—increased within most high-income economies. The United States remains the most unequal high-income economy in the world. The disparity reflects a surge in incomes for the richest population segments, along with sluggish or even falling incomes for the poorest, especially during bad economic times.

At the same time, the middle class is shrinking. The share of Americans in the middle class has dropped, from 61 percent in 1971 to 51 percent in 2019. Some have moved up the income ladder, but an increasing number are also moving down. The middle class has also shrunk considerably in countries like Germany, Canada, and Sweden, but other advanced economies have generally experienced more modest declines.

From the introduction to the Peterson Institute for International Economics report “How to Fix Economic Inequality?”
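The Gini coefficient the report leans on can be computed directly from raw income data. A minimal sketch, using the standard rank-weighted formula (the income samples below are made up for illustration, not drawn from the report):

```python
def gini(incomes):
    """Gini coefficient: 0.0 = perfect equality, values near 1.0 = maximal inequality."""
    xs = sorted(incomes)  # ascending order; ranks matter for the formula
    n = len(xs)
    total = sum(xs)
    # Rank-weighted form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    rank_weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * rank_weighted / (n * total) - (n + 1) / n

# Hypothetical income distributions
equal = [40_000] * 5
skewed = [15_000, 20_000, 25_000, 40_000, 400_000]

print(round(gini(equal), 3))   # 0.0
print(round(gini(skewed), 3))  # 0.632
```

The same formula, scaled up to national income microdata, underlies the cross-country comparisons in the report.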

Founded by Pete Peterson (Lehman Brothers Chair, Nixon’s Secretary of Commerce, and co-founder, with Trump supporter Stephen Schwarzman, of investment giant Blackstone), and overseen by trustees who include Larry Summers, Alan Greenspan, and George Shultz, PIIE is hardly a “progressive” think tank. But they are worried: quite apart from its obvious humanitarian toll, inequality at the scales that have emerged is highly unlikely to be sustainable (even at the human cost that we’ve so far been willing to pay). Put more bluntly, it is ever more likely to torpedo the domestic (and large hunks of the global) economy and indeed to threaten the stability of democratic society.

Other sources suggest that they have very good reason for concern:

• Even as the stock market hits new highs, 26 million Americans are suffering food insecurity. (See also: “The boom in US GDP does not match what’s happening to Americans’ wallets.”)

• The distribution of assets in the US (and other developed economies, but most egregiously in the U.S.) is even more skewed than income: see data in the PIIE report and “The Asset Economy.”

• And lest we think that this issue is confined to the U.S., social democracies throughout the developed world are feeling the same pressures (albeit mostly less dramatically).

FWIW, your correspondent doesn’t have terrifically strong confidence in the remedies mooted in the PIIE report. Even as the authors recognize that the issues are deeply structural, they confine themselves to recommending (what seem to your correspondent) relatively timid and incremental steps– which, even if taken (and most require legislative or regulatory action), are more likely to slow the polarization underway than to reverse it.

But they are worth contemplating, if only to provoke us to more fundamental measures (e.g., here). And in any case, it’s telling– and one can only hope, encouraging– that determined champions of the very neoliberal economics that have gotten us here recognize, at least, that unless we change course, we’re speeding into a dead end.

* Warren Buffett


As we agree that fair’s fair, we might recall that it was on this date in 2001 that Enron, once #7 in the Fortune 500, declared bankruptcy. Its stock, which had traded as high as $90 in 2000, closed November 30th at 26 cents, wiping out billions in wealth (an appreciable part of it disappearing from employees’ pension plans). At the time, Enron had $63.4 billion in assets, earning it the honor of being the nation’s largest bankruptcy to that date. (It would be surpassed by the WorldCom bankruptcy a year later.)

Jeff Skilling, Enron’s CEO, served 11 years in prison on several counts of fraud; Andy Fastow, Enron’s CFO, served about five years. Chairman Ken Lay was also found guilty, but died before his sentencing. Enron’s accounting firm, Arthur Andersen (at the time a leader among the “Big 5”), which at least “missed” the egregious fraudulent practices in its audits of Enron, was effectively forced to dissolve after the scandal.

Published a year before the scandal broke


“The Net is the new underlying infrastructure for civilization itself”*…




Most governments have traditionally argued that there are certain critical societal assets that should be built, managed, and controlled by public entities — think streets, airports, fire fighting, parks, policing, tunnels, an army. (And in just about every rich country except this one, access to and/or the provision of health care.) The choice to have, say, a city-owned park reflects two key facts: first, a civic judgment that having green outdoor spaces is important to the city; and second, that free parks open to all are unlikely to be produced by private companies driven by a motive for profit.

When it comes to the Internet we all live on, huge swaths of it are owned, controlled, and operated by private companies — companies like Facebook, Google, Amazon, Apple, Microsoft, and Twitter. In many cases, those companies’ public impacts aren’t in any significant conflict with their private motivations for profit. But in some cases… they are. Is there room for a public infrastructure that can offer an alternative to (or reduce the harm done by) those tech giants?

A diagnosis of the issue with a set of proposed remedies: “Public infrastructure isn’t just bridges and water mains: Here’s an argument for extending the concept to digital spaces.”

This article is based on a piece by Ethan Zuckerman, written for the Knight First Amendment Institute at Columbia, in which he lays out what he calls the case for digital public infrastructure. (He also published a summary of it here.)

Pair with this consideration of another piece of our political/social/economic “infrastructure,” corporate law, and its effects– contract, property, collateral, trust, corporate, and bankruptcy law, an “empire of law”: “How ‘Big Law’ Makes Big Money.”

* Doc Searls


As we contemplate the commons, we might recall that it was on this date in 1865 that the U.S. government dismantled a monstrous piece of “infrastructure” when Congress passed the Thirteenth Amendment to the United States Constitution and submitted it to the states for ratification.

The amendment abolished slavery with the declaration: “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”

Thomas Nast’s engraving, “Emancipation,” 1865



Written by LW

January 31, 2020 at 1:01 am

“Not with a bang, but a whimper”*…




What actually happens to workers when a company deploys automation? The common assumption seems to be that the employee simply disappears wholesale, replaced one-for-one with an AI interface or an array of mechanized arms.

Yet given the extensive punditeering, handwringing, and stump-speeching around the “robots are coming for our jobs” phenomenon—which I will never miss an opportunity to point out is falsely represented—research into what happens to the individual worker remains relatively thin. Studies have attempted to monitor the impact of automation on wages in aggregate or to correlate employment to levels of robotization.

But few in-depth investigations have been made into what happens to each worker after their companies roll out automation initiatives. Earlier this year, though, a paper authored by economists James Bessen, Maarten Goos, Anna Salomons, and Wiljan Van den Berge set out to do exactly that…

What emerges is a portrait of workplace automation that is ominous in a less dramatic manner than we’re typically made to understand. For one thing, there is no ‘robot apocalypse’, even after a major corporate automation event. Unlike mass layoffs, automation does not appear to immediately and directly send workers packing en masse.

Instead, automation increases the likelihood that workers will be driven away from their previous jobs at the companies—whether they’re fired, or moved to less rewarding tasks, or quit—and causes a long-term loss of wages for the employee.

The report finds that “firm-level automation increases the probability of workers separating from their employers and decreases days worked, leading to a 5-year cumulative wage income loss of 11 percent of one year’s earnings.” That’s a pretty significant loss.
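To put the headline number in concrete terms, a quick back-of-envelope calculation (the $50,000 salary is a hypothetical for illustration, not a figure from the study):

```python
annual_earnings = 50_000   # hypothetical annual salary
loss_rate = 0.11           # 11% of one year's earnings, per the study
window_years = 5           # the study's 5-year cumulative window

cumulative_loss = loss_rate * annual_earnings
average_per_year = cumulative_loss / window_years

print(f"${cumulative_loss:,.0f} lost over five years "
      f"(~${average_per_year:,.0f}/year on average)")
# → $5,500 lost over five years (~$1,100/year on average)
```

Spread thinly like that, the loss is easy to miss in any one paycheck—which is precisely the piece’s point about automation’s invisibility.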

Worse still, the study found that even in the Netherlands, which has a comparatively generous social safety net relative to, say, the United States, workers were only able to offset a fraction of those losses with benefits provided by the state. Older workers, meanwhile, were more likely to retire early—deprived of years of income they may have been counting on.

Interestingly, the effects of automation were felt similarly through all manner of company—small, large, industrial, services-oriented, and so on. The study covered all non-finance sector firms, and found that worker separation and income loss were “quite pervasive across worker types, firm sizes and sectors.”

Automation, in other words, is a more pervasive, slower-acting, and much less visible phenomenon than the robots-are-eating-our-jobs talk is preparing us for…

The result, Bessen says, is an added strain on the social safety net that it is currently woefully unprepared to handle. As more and more firms join the automation goldrush—a 2018 McKinsey survey of 1,300 companies worldwide found that three-quarters of them had either begun to automate business processes or planned to do so next year—the number of workers forced out of firms seems likely to tick up, or at least hold steady. What is unlikely to happen, per this research, is an automation-driven mass exodus of jobs.

This is a double-edged sword: While it’s obviously good that thousands of workers are unlikely to be fired in one fell swoop when a process is automated at a corporation, it also means the pain of automation is distributed in smaller, more personalized doses, and thus less likely to prompt any sort of urgent public response. If an entire Amazon warehouse were suddenly automated, it might spur policymakers to try to address the issue; if automation has been slowly hurting us for years, it’s harder to rally support for stemming the pain…

Brian Merchant on the ironic challenge of addressing the slow-motion, trickle-down social, economic, and cultural threats of automation– that they will accrue gradually, like erosion, not catastrophically… making it harder to generate a sense of urgency around creating a response: “There’s an Automation Crisis Underway Right Now, It’s Just Mostly Invisible.”

* T. S. Eliot, “The Hollow Men”


As we think systemically, we might recall that it was on this date in 1994 that Ken McCarthy, Marc Andreessen, and Mark Graham held the first conference to focus on the commercial potential of the World Wide Web.



Written by LW

November 5, 2019 at 1:01 am
