(Roughly) Daily


“If we are to prevent megatechnics from further controlling and deforming every aspect of human culture, we shall be able to do so only with the aid of a radically different model derived directly, not from machines, but from living organisms and organic complexes (ecosystems)”*…

In a riff on Lewis Mumford, the redoubtable L. M. Sacasas addresses the unraveling of modernity…

The myth of the machine underlies a set of three related and interlocking presumptions which characterized modernity: objectivity, impartiality, and neutrality. More specifically, the presumptions that we could have objectively secured knowledge, impartial political and legal institutions, and technologies that were essentially neutral tools but which were ordinarily beneficent. The last of these appears to stand somewhat apart from the first two in that it refers to material culture rather than to what might be taken as more abstract intellectual or moral stances. In truth, however, they are closely related. The more abstract intellectual and institutional pursuits were always sustained by a material infrastructure, and, more importantly, the machine supplied a master template for the organization of human affairs.

Just as the modern story began with the quest for objectively secured knowledge, this ideal may have been the first to lose its implicit plausibility. From the late 19th century onward, philosophers, physicists, sociologists, anthropologists, psychologists, and historians have, among others, proposed a more complex picture that emphasized the subjective, limited, contingent, situated, and even irrational dimensions of how humans come to know the world. The ideal of objectively secured knowledge became increasingly questionable throughout the 20th century. Some of these trends get folded under the label “postmodernism,” but I found the term unhelpful at best a decade ago and now find it altogether useless.

We can similarly trace a growing disillusionment with the ostensible impartiality of modern institutions. This takes at least two forms. On the one hand, we might consider the frustrating and demoralizing character of modern bureaucracies, which we can describe as rule-based machines designed to outsource judgement and enhance efficiency. On the other, we can note the heightened awareness of the actual failures of modern institutions to live up to the ideals of impartiality, which has been, in part, a function of the digital information ecosystem.

But while faith in the possibility of objectively secured knowledge and impartial institutions faltered, the myth of the machine persisted in the presumption that technology itself was fundamentally neutral. Until very recently, that is. Or so it seems. And my thesis (always for disputation) is that the collapse of this last manifestation of the myth brings the whole house down. This in part because of how much work the presumption of technological neutrality was doing all along to hold American society together. (International readers: as always read with a view to your own setting. I suspect there are some areas of broad overlap and other instances when my analysis won’t travel well). Already by the late 19th century, progress had become synonymous with technological advancements, as Leo Marx argued. If social, political, or moral progress stalled, then at least the advance of technology could be counted on…

But over the last several years, the plausibility of this last and also archetypal manifestation of the myth of the machine has also waned. Not altogether, to be sure, but in important and influential segments, and throughout a wide cross-section of society, too. One can perhaps see the shift most clearly in the public discourse about social media and smartphones, but this may be a symptom of a larger disillusionment with technology. And not only technological artifacts and systems, but also with the technocratic ethos and the public role of expertise.

If the myth of the machine in these three manifestations was, in fact, a critical element of the culture of modernity, underpinning its aspirations, then when each in turn becomes increasingly implausible the modern world order comes apart. I’d say that this is more or less where we’re at. You could usefully analyze any number of cultural fault lines through this lens. The center, which may not in fact hold, is where you find those who still operate as if the presumptions of objectivity, impartiality, and neutrality still compelled broad cultural assent, and they are now assailed from both the left and the right by those who have grown suspicious or altogether scornful of such presumptions. Indeed, the left/right distinction may be less helpful than the distinction between those who uphold some combination of the values of objectivity, impartiality, and neutrality and those who no longer find them compelling or desirable.

What happens when the systems and strategies deployed to channel often violent clashes within a population deeply, possibly intractably divided about substantive moral goods and now even about what Arendt characterized as the publicly accessible facts upon which competing opinions could be grounded—what happens when these systems and strategies fail?

It is possible to argue that they failed long ago, but the failure was veiled by an unevenly distributed wave of material abundance. Citizens became consumers and, by and large, made peace with the exchange. After all, if the machinery of government could run of its own accord, what was there left to do but enjoy the fruits of prosperity? But what if abundance was an unsustainable solution, either because it taxed the earth at too high a rate or because it was purchased at the cost of other values such as rootedness, meaningful work and involvement in civic life, abiding friendships, personal autonomy, and participation in rich communities of mutual care and support? Perhaps in the framing of that question, I’ve tipped my hand about what might be the path forward.

At the heart of technological modernity there was the desire—sometimes veiled, often explicit—to overcome the human condition. The myth of the machine concealed an anti-human logic: if the problem is the failure of the human to conform to the pattern of the machine, then bend the human to the shape of the machine or eliminate the human altogether. The slogan of one of the high-modernist world’s fairs of the 1930s comes to mind: “Science Finds, Industry Applies, Man Conforms.” What is now being discovered in some quarters, however, is that the human is never quite eliminated, only diminished…

Eminently worth reading in full: “The Myth of the Machine,” from @LMSacasas.

For a deep dive into similar waters, see John Ralston Saul’s (@JohnRalstonSaul) Voltaire’s Bastards.

[Image above: source]

* Lewis Mumford, The Myth of the Machine


As we rethink rudiments, we might recall that it was on this date in 1919 that Arthur Eddington confirmed Einstein’s light-bending prediction– a part of the theory of general relativity– using photos of a solar eclipse. Eddington’s paper the following year was the “debut” of Einstein’s theoretical work in most of the English-speaking world (and occasioned an urban legend: when a reporter supposedly suggested that “only three people understand relativity,” Eddington is said to have jokingly replied “Oh, who’s the third?”)

One of Eddington’s photographs of the total solar eclipse of 29 May 1919, presented in his 1920 paper announcing its success, confirming Einstein’s theory that light “bends”

“Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics?”*…

Half a century ago, Lewis Mumford developed a concept that explains why we trade autonomy for convenience…

… Surveying the state of the high-tech life, it is tempting to ponder how it got so bad, while simultaneously forgetting what it was that initially convinced one to hastily click “I agree” on the terms of service. Before certain social media platforms became foul-smelling swamps of conspiratorial misinformation, many of us joined them for what seemed like good reasons; before sighing at the speed with which their batteries die, smartphone owners were once awed by these devices; before grumbling that there was nothing worth watching, viewers were astounded by how much streaming content was available at one’s fingertips. Overwhelmed by the way today’s tech seems to be burying us in the bad, it’s easy to forget the extent to which tech won us over by offering us a share in the good — or to be more precise, in “the goods.”

Nearly 50 years ago, long before smartphones and social media, the social critic Lewis Mumford put a name to the way that complex technological systems offer a share in their benefits in exchange for compliance. He called it a “bribe.” With this label, Mumford sought to acknowledge the genuine plentitude that technological systems make available to many people, while emphasizing that this is not an offer of a gift but of a deal. Surrender to the power of complex technological systems — allow them to oversee, track, quantify, guide, manipulate, grade, nudge, and surveil you — and the system will offer you back an appealing share in its spoils. What is good for the growth of the technological system is presented as also being good for the individual, and as proof of this, here is something new and shiny. Sure, that shiny new thing is keeping tabs on you (and feeding all of that information back to the larger technological system), but it also lets you do things you genuinely could not do before. For a bribe to be accepted it needs to promise something truly enticing, and Mumford, in his essay “Authoritarian and Democratic Technics,” acknowledged that “the bargain we are being asked to ratify takes the form of a magnificent bribe.” The danger, however, was that “once one opts for the system no further choice remains.” 

For Mumford, the bribe was not primarily about getting people into the habit of buying new gadgets and machines. Rather it was about incorporating people into a world that complex technological systems were remaking in their own image. Anticipating resistance, the bribe meets people not with the boot heel, but with the gift subscription.

The bribe is a discomforting concept. It asks us to consider the ways the things we purchase wind up buying us off, it asks us to see how taking that first bribe makes it easier to take the next one, and, even as it pushes us to reflect on our own complicity, it reminds us of the ways technological systems eliminate their alternatives. Writing about the bribe decades ago, Mumford was trying to sound the alarm, as he put it: “This is not a prediction of what will happen, but a warning against what may happen.” As with all of his glum predictions, it was one that Mumford hoped to be proven wrong about. Yet as one scrolls between reviews of the latest smartphone, revelations about the latest misdeeds of some massive tech company, and commentary about the way we have become so reliant on these systems that we cannot seriously speak about simply turning them off — it seems clear that what Mumford warned “may happen” has indeed happened…

Eminently worth reading in full: “The Magnificent Bribe,” by Zachary Loeb in @_reallifemag.

As to (some of) the modern implications of that bargain, see also Shoshana Zuboff’s “You Are the Object of a Secret Extraction Operation.”

As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information. The promise of the surveillance dividend now draws surveillance economics into the “normal” economy, from insurance, retail, banking and finance to agriculture, automobiles, education, health care and more. Today all apps and software, no matter how benign they appear, are designed to maximize data collection.

Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic. The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage…

And resonantly: “AI-tocracy,” a working paper from NBER that links the development of artificial intelligence with the interests of autocracies; from the abstract:

Can frontier innovation be sustained under autocracy? We argue that innovation and autocracy can be mutually reinforcing when: (i) the new technology bolsters the autocrat’s power; and (ii) the autocrat’s demand for the technology stimulates further innovation in applications beyond those benefiting it directly. We test for such a mutually reinforcing relationship in the context of facial recognition AI in China. To do so, we gather comprehensive data on AI firms and government procurement contracts, as well as on social unrest across China during the last decade. We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats’ suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.

(And, Anne Applebaum warns, “The Bad Guys Are Winning.”)

* “Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics? The answer to this question is both paradoxical and ironic. Present day technics differs from that of the overtly brutal, half-baked authoritarian systems of the past in one highly favorable particular: it has accepted the basic principle of democracy, that every member of society should have a share in its goods. By progressively fulfilling this part of the democratic promise, our system has achieved a hold over the whole community that threatens to wipe out every other vestige of democracy.

The bargain we are being asked to ratify takes the form of a magnificent bribe. Under the democratic-authoritarian social contract, each member of the community may claim every material advantage, every intellectual and emotional stimulus he may desire, in quantities hardly available hitherto even for a restricted minority: food, housing, swift transportation, instantaneous communication, medical care, entertainment, education. But on one condition: that one must not merely ask for nothing that the system does not provide, but likewise agree to take everything offered, duly processed and fabricated, homogenized and equalized, in the precise quantities that the system, rather than the person, requires. Once one opts for the system no further choice remains. In a word, if one surrenders one’s life at source, authoritarian technics will give back as much of it as can be mechanically graded, quantitatively multiplied, collectively manipulated and magnified.”

– Lewis Mumford in “Authoritarian and Democratic Technics,” via @LMSacasas


As we untangle user agreements, we might recall that it was on this date in 1970 that Douglas Engelbart (see here, here, and here) was granted a patent (US No. 3,541,541) on the “X-Y Position Indicator for a Display System,” the world’s first prototype computer mouse– a wooden block containing the tracking apparatus, with a single button attached.



“To pay attention, this is our endless and proper work”*…

The Attention Economy…

“Attention discourse” is how I usually refer to the proliferation of essays, articles, talks, and books around the problem of attention (or, alternatively, distraction) in the age of digital media. While there have been important precursors to digital age attention discourse dating back to the 19th century, I’d say the present iteration probably kicked off around 2008 with Nick Carr’s essay in the Atlantic, “Is Google Making Us Stupid?” And while disinformation discourse has supplanted it in the public imagination over the past few years, attention discourse is alive and well…

Attention discourse proceeds under the sign of scarcity. It treats attention as a resource, and, by doing so, maybe it has given up the game. To speak about attention as a resource is to grant and even encourage its commodification. If attention is scarce, then a competitive attention economy flows inevitably from it. In other words, to think of attention as a resource is already to invite the possibility that it may be extracted. Perhaps this seems like the natural way of thinking about attention, but, of course, this is precisely the kind of certainty [Ivan Illich] invited us to question…  

His crusade against the colonization of experience by economic rationality led him not only to challenge the assumption of scarcity and defend the realm of the vernacular, but also to studiously avoid the language of “values” in favor of talk about the “good.” He believed that the good could be established by observing the requirements of proportionality or complementarity in a given moment or situation. The good was characterized by its fittingness. Illich sometimes characterized it as a matter of answering a call as opposed to applying a rule.

“The transformation of the good into values,” he answers, “of commitment into decision, of question into problem, reflects a perception that our thoughts, our ideas, and our time have become resources, scarce means which can be used for either of two or several alternative ends. The word value reflects this transition, and the person who uses it incorporates himself in a sphere of scarcity.”

A little further on in the conversation, Illich explains that value is “a generalization of economics. It says, this is a value, this is a nonvalue, make a decision between the two of them. These are three different values, put them in precise order.” “But,” he goes on to explain, “when we speak about the good, we show a totally different appreciation of what is before us. The good is convertible with being, convertible with the beautiful, convertible with the true.”…

“Your Attention Is Not a Resource”: L.M. Sacasas (@LMSacasas) wields Illich to argue that “you and I have exactly as much attention as we need.”

(image above: source)

* Mary Oliver


As we go for the good, we might recall that it was on this date in 1965 that NASA launched Hughes Aircraft’s Early Bird (now known officially as Intelsat I) into orbit. It was the first communications satellite to be placed in synchronous earth orbit– and it successfully demonstrated the (subsequently explosively growing) use of such satellites for commercial communications.

“Early Bird” being prepared

