(Roughly) Daily


“I never wonder to see men wicked, but I often wonder to see them not ashamed”*…

Perhaps because they– we– are not, there has arisen a culture of shaming. Charlie Tyson considers the rise of online humiliation…

“Men punish with shame,” wrote the sixteenth-century poet Thomas Wyatt. It is the “greatest punishment on earth, yea! greater than death.” Other forms of punishment—torture, solitary confinement—may do more to break the body and spirit. But the primitive power of shaming, and the reliability with which shame punishments are administered informally by the community as well as formally by the state, make it an especially disturbing mode of discipline. The ubiquity of shame punishments across many cultures—from the penal tattooing of slaves and criminals in ancient Rome to the stocks, pillory, and cucking stool of early modern England to the practice in modern China, only recently outlawed, of roping together suspected sex workers and forcing them to march barefoot through the streets—alerts us to the likelihood that we are dealing with a human propensity that can never be banished, only contained.

An ambient culture of shame saturates the online social environment. On such platforms as Twitter or TikTok or YouTube the risk of humiliation is ever present. Some online performers have neutralized the threat of cringe through stylized self-embarrassment: comedians riff on their own narcissism; dancers engage in cartoonish slapstick, reminiscent of Buster Keaton or Charlie Chaplin (as if, on the internet, the history of cinema is replaying itself), ensuring that they pie themselves in the face before anyone else can. The rest of us, fated to play “ourselves” before an unknown and fickle audience, must improvise other defenses.

Cancel culture, callouts, online harassment, mob justice, accountability: all of these terms refer to structurally similar phenomena (the targeting of the one by the many, in front of an audience), yet none offers a neutral description. What is decried as “cancel culture” is sometimes just spirited criticism; what is endorsed as “accountability” is sometimes gratuitous and cruel. Given the confusion and sophistry that mar discussion of online shaming, it is worth keeping two facts in mind. The first is that, regardless of one’s views about the merits of shaming in any one case, we have devised a social-technological structure in which persons can be selected virtually at random and held up for the scorn of thousands, as in the cases Jon Ronson recounted in his 2015 book So You’ve Been Publicly Shamed. The second is that these shame-storms occur not in a public square, as Twitter is sometimes misleadingly dubbed, but in spaces controlled by private capital. “Egged on by algorithms,” Cathy O’Neil writes in her book The Shame Machine, “millions of us participate in these dramas, providing the tech giants with free labor.” Pile-ons increase engagement. Our fury pads the purses of tech capitalists.

Skepticism about public shaming was once widely shared by leftists and liberals, on the grounds that shaming threatens dignity and tends to target stigmatized groups. Article 12 of the UN’s Universal Declaration of Human Rights states that people deserve protection from attacks on their “honour and reputation.” Shame campaigns might be deployed effectively, and justly, in response to harms committed by corporations or governments. But shaming citizens was another matter. A good society was supposed to defend its members from humiliation.

These days, shaming is more in vogue. Many commentators on the left, while rejecting the shaming of vulnerable groups (queer people, poor people, people with disabilities), see the technique as a valuable way of shoring up social norms. Some argue that it’s an effective response to racist and sexist behavior. Tressie McMillan Cottom recently argued in The New York Times that shaming is a corrective to a white-dominated culture: against the backdrop of a more open and diverse public square, “shame is evidence of a democratic society operating democratically.”

Yet in its insistence on conformity, shaming, even when harnessed for ostensibly progressive ends, has a conservative flavor. Indeed, though the American right may complain about cancel culture, it has an undeniable taste for public shaming. The right-wing Twitter account Libs of TikTok, for instance, has gained more than a million followers by holding up queer and trans people as objects of disgust. The account’s method is to rip videos from TikTok (featuring, say, gender-fluid teenagers talking about their pronouns), a strategy that should remind us that our theater of shame is not a single toxic website but an entire networked architecture. Conservatives have also enlisted the force of law to shame transgender people, as with bills mandating genital exams for young athletes whose gender is disputed. The ascent of Donald Trump, whose principal qualifications seemed to be his immunity to shame and his gusto for shaming others (as when he mocked a reporter’s disability and taunted Michael Bloomberg for being short), confirms the political resonance of shame in our present moment.

Structural problems in how the online world is organized have also deformed our thinking about shame. The most popular social-media sites are commercial platforms flooded with advertising and propaganda and run by black-box algorithms that exploit shaming campaigns to boost user engagement. A neutral public square this is not. The wide reach of digital life means that one’s reputation can be muddied in a matter of minutes; the speed and scale at which this can take place make today’s online shaming dynamics different from past forms of shame punishment. Technology companies have handed us weapons of reputational damage that are invariably set to hair-trigger alert. The result is an atmosphere of surveillance in which the threat of humiliation has emerged as an effective tool of social control…

A provocative analysis, eminently worth reading in full: “Theater of Shame,” from @CharlieTyson1 in @YaleReview.

(Image above: source)

* Jonathan Swift

###

As we mind our manners, we might recall that it was on this date in 1896 that Richard F. Outcault‘s comic strip Hogan’s Alley– featuring “the Yellow Kid” (Mickey Dugan)– debuted in William Randolph Hearst’s New York Journal. While the Yellow Kid had appeared irregularly before, this was (many historians suggest) the first full-color comic to be printed regularly, and one of the earliest strips in the history of the medium; Outcault’s use of word balloons in the Yellow Kid shaped the basic appearance and use of balloons in subsequent newspaper comic strips and comic books. Outcault’s work aimed at humor and social commentary; but (perhaps ironically) the term “yellow journalism”– coined for stories sensationalized for the sake of selling papers, as in the publications of Hearst and of Joseph Pulitzer, an earlier home to sporadic appearances of the Yellow Kid– took its name from the “Yellow Kid” cartoons.

source

Written by (Roughly) Daily

October 18, 2022 at 1:00 am

“In our world of big names, curiously, our true heroes tend to be anonymous”*…

Pattie Maes was inventing the core principles behind the social media age when Mark Zuckerberg was still in kindergarten, but her contributions have been largely unrecognized. Steven Johnson explains…

Anyone who was around for the early days of the World Wide Web, before the Netscape IPO and the dotcom boom, knows that there was a strange quality to the medium back then – in many ways the exact opposite of the way the Web works today. It was oddly devoid of people. Tim Berners-Lee had conjured up a radically new way of organizing information through the core innovations of hypertext and URLs, which created a standardized way of pointing to the location of documents. But almost every Web page you found yourself on back in those frontier days was frozen in the form that its author had originally intended. The words on the screen couldn’t adapt to your presence and your interests as you browsed. Interacting with other humans and having conversations – all that was still what you did with email or USENET or dial-up bulletin boards like The Well. The original Web was more like a magic library, filled with pages that could connect to other pages through miraculous wormholes of links. But the pages themselves were fixed, and everyone browsed alone.

One of the first signs that the Web might eventually escape those confines arrived in the last months of 1994, with the release of an intriguing (albeit bare-bones) prototype called HOMR, short for the Helpful Online Music Recommendation service.

HOMR was one of a number of related projects that emerged in the early-to-mid-90s out of the MIT lab of the Belgian-born computer scientist Pattie Maes, projects that eventually culminated in a company that Maes co-founded, called Firefly. HOMR pulled off a trick that was genuinely unprecedented at the time: it could make surprisingly sophisticated recommendations of music that you might like. It seemed to be capable of learning something about you as an individual. Unlike just about everything else on the Web back then, HOMR’s pages were not one-size-fits-all. They suggested, perhaps for the first time, that this medium was capable of conveying personalized information. Firefly would then take that advance to the next level: not just recommending music, but actually connecting you to other people who shared your tastes.

Maes called the underlying approach “collaborative filtering”, but looking back on it with more than two decades’ worth of hindsight, it’s clear that what we were experiencing with HOMR and Firefly was the very beginnings of a new kind of software platform that would change the world in the coming decades, for better and for worse: social networks…
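For readers curious about the mechanics, here is a minimal sketch of the collaborative-filtering idea described above– score the items a user hasn’t rated by the ratings of other users, weighted by how closely their tastes overlap. The tiny ratings table, the artist names, and the cosine-similarity weighting are illustrative assumptions for this sketch, not a reconstruction of HOMR’s or Firefly’s actual system.

```python
# A toy sketch of user-based collaborative filtering (illustrative data,
# not HOMR's or Firefly's actual algorithm): items a user hasn't rated are
# scored by other users' ratings, weighted by taste similarity.

from math import sqrt

# Hypothetical ratings: user -> {artist: rating on a 1-5 scale}
ratings = {
    "ann":  {"Bowie": 5, "Eno": 4, "Prince": 2},
    "ben":  {"Bowie": 4, "Eno": 5, "Kraftwerk": 5},
    "cara": {"Prince": 5, "Madonna": 4, "Bowie": 2},
}

def similarity(a, b):
    """Cosine similarity computed over the artists both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[x] * b[x] for x in shared)
    return dot / (sqrt(sum(a[x] ** 2 for x in shared)) *
                  sqrt(sum(b[x] ** 2 for x in shared)))

def recommend(user):
    """Rank artists the user hasn't rated by similarity-weighted ratings."""
    mine = ratings[user]
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for artist, rating in theirs.items():
            if artist not in mine:
                scores[artist] = scores.get(artist, 0.0) + sim * rating
                weights[artist] = weights.get(artist, 0.0) + sim
    return sorted(((scores[a] / weights[a], a) for a in scores if weights[a]),
                  reverse=True)

print(recommend("ann"))  # e.g. Kraftwerk and Madonna, ranked by predicted appeal
```

Run on this made-up data, the sketch suggests Kraftwerk and Madonna to “ann” purely on the strength of her overlap with like-minded listeners– the same basic move, at internet scale, that Firefly extended from recommending music to connecting people.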

Read on at “Intelligent Agent: How Pattie Maes almost invented social media,” from @stevenbjohnson, the first in a new series, “Hidden Heroes.”

* Daniel J. Boorstin

###

As we give credit where credit is due, we might recall that it was on this date in 1960 that the FDA approved the first birth control pill– an oral medication for use by women as a contraceptive. In 1953, birth control crusader Margaret Sanger and her supporter/financier, philanthropist Katharine Dexter McCormick, had given Dr. Gregory Pincus $150,000 to continue his prior research and develop a safe and effective oral contraceptive for women.

In just five years, almost half of married women on birth control were using it.

But the real revolution would come when unmarried women got access to oral contraceptives. That took time. But in around 1970 – 10 years after the pill was first approved – US state after US state started to make it easier for single women to get the pill…

And that was when the economic revolution really began.

Women in America started studying particular kinds of degrees – law, medicine, dentistry and MBAs – which had previously been very masculine.

In 1970, medical degrees were over 90% male. Law degrees and MBAs were over 95% male. Dentistry degrees were 99% male. But at the beginning of the 1970s – equipped with the pill – women surged into all these courses. At first, women made up a fifth of the class, then a quarter. By 1980 they often made up a third…

“The tiny pill which gave birth to an economic revolution,” by Tim Harford, in the BBC’s series 50 Things That Made the Modern Economy.

source

“You say you’re a pessimist, but I happen to know that you’re in the habit of practicing your flute for two hours every evening”*…

The Harrowing of Hell, Hieronymus Bosch

A couple of weeks ago, (R)D featured a piece by Jonathan Haidt, “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” in which Haidt critiqued, among others, Robert Wright and his influential book, Nonzero. In the spirit of George Bernard Shaw (who observed: “Both optimists and pessimists contribute to society. The optimist invents the aeroplane, the pessimist the parachute.”), Wright responds…

… There are three main culprits in Haidt’s story, three things that have torn our world asunder: the like button, the share button (or, on Twitter, the retweet button), and the algorithms that feed on those buttons. “Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the country’s future—and to us as a people.”

I would seem uniquely positioned to cheer us up by taking issue with Haidt’s depressing diagnosis. Near the beginning of his piece, he depicts my turn-of-the-millennium book Nonzero: The Logic of Human Destiny as in some ways the antithesis of his thesis—as sketching a future in which information technology unites rather than divides…

Well, two things I’m always happy to do are (1) cheer people up; and (2) defend a book I’ve written. I’d like to thank Haidt (who is actually a friend—but whom I’ll keep calling “Haidt” to lend gravitas to this essay) for providing me the opportunity to do both at once.

But don’t let your expectations get too high about the cheering people up part—because, for starters, the book I’m defending wasn’t that optimistic. I wrote in Nonzero, “While I’m basically optimistic, an extremely bleak outcome is obviously possible.” And even if we avoid a truly apocalyptic fate, I added, “several moderately bleak outcomes are possible.”

Still, looking around today, I don’t see quite as much bleakness as Haidt seems to see. And one reason, I think, is that I don’t see the causes of our current troubles as being quite as novel as he does. We’ve been here before, and humankind survived…

Read on for a brief history of humankind’s wrestling with new information technologies (e.g., writing and the printing press). Wright concludes…

In underscoring the importance of working to erode the psychology of tribalism (a challenge approachable from various angles, including one I wrote a book about), I don’t mean to detract from the value of piecemeal reforms. Haidt offers worthwhile ideas about how to make social media less virulent and how to reduce the paralyzing influence of information technology on democracy. (He spends a lot of time on the info tech and democracy issue—and, once again, I’d say he’s identified a big problem but also a longstanding problem; I wrote about it in 1995, in a Time magazine piece whose archival version is mis-dated as 2001.) The challenge we face is too big to let any good ideas go to waste, and Haidt’s piece includes some good ones.

Still, I do think that stepping back and looking at the trajectory of history lets us assess the current turmoil with less of a sense of disorientation than Haidt seems to feel. At least, that’s one takeaway from my argument in Nonzero, which chronicled how the evolution of technology, especially information technology, had propelled human social organization from the hunter-gatherer village to the brink of global community—a threshold that, I argued, we will fail to cross at our peril.

This isn’t the place to try to recapitulate that argument in compelling form. (There’s a reason I devoted a whole book to it.) So there’s no reason the argument should make sense to you right now. All I can say is that if you do ever have occasion to assess the argument, and it does make sense to you, the turbulence we’re going through will also make more sense to you.

“Is Everything Falling Apart?”– @JonHaidt thinks so; @robertwrighter is not so sure.

Apposite: “An optimist’s guide to the future: the economist who believes that human ingenuity will save the world,” and “The Future Will Be Shaped by Optimists,” from @kevin2kelly at @TedConferences.

* Friedrich Nietzsche (criticizing Schopenhauer)

###

As we look on the bright side of life, we might send darkly-tinted birthday greetings to Oswald Spengler; he was born on this date in 1880. Best known for his two-volume work The Decline of the West (Der Untergang des Abendlandes), published in 1918 and 1922, he was a historian and philosopher of history who developed an “organic theory” of history suggesting that human cultures and civilizations are akin to biological entities, each with a limited, predictable, and deterministic lifespan– and that around the year 2000, Western civilization would enter a period of pre-death crisis, the response to which would usher in 200 years of Caesarism (extra-constitutional omnipotence of the executive branch of government) before Western civilization’s final collapse. He was a major influence on many historians (including Arnold Toynbee and Samuel “Clash of Civilizations” Huntington).

source

“Our social tools are not an improvement to modern society, they are a challenge to it”*…

Nicolás Ortega. Source: “Turris Babel,” Coenraet Decker, 1679

Jonathan Haidt ponders the poisonous impact of social media, arguing that “It’s not just a phase,” and considers what we might do about it…

… It’s been clear for quite a while now that red America and blue America are becoming like two different countries claiming the same territory, with two different versions of the Constitution, economics, and American history. But Babel is not a story about tribalism; it’s a story about the fragmentation of everything. It’s about the shattering of all that had seemed solid, the scattering of people who had been a community. It’s a metaphor for what is happening not only between red and blue, but within the left and within the right, as well as within universities, companies, professional associations, museums, and even families.

Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the country’s future—and to us as a people. How did this happen? And what does it portend for American life?

The high point of techno-democratic optimism was arguably 2011, a year that began with the Arab Spring and ended with the global Occupy movement. That is also when Google Translate became available on virtually all smartphones, so you could say that 2011 was the year that humanity rebuilt the Tower of Babel. We were closer than we had ever been to being “one people,” and we had effectively overcome the curse of division by language. For techno-democratic optimists, it seemed to be only the beginning of what humanity could do.

In February 2012, as he prepared to take Facebook public, Mark Zuckerberg reflected on those extraordinary times and set forth his plans. “Today, our society has reached another tipping point,” he wrote in a letter to investors. Facebook hoped “to rewire the way people spread and consume information.” By giving them “the power to share,” it would help them to “once again transform many of our core institutions and industries.”

In the 10 years since then, Zuckerberg did exactly what he said he would do. He did rewire the way we spread and consume information; he did transform our institutions, and he pushed us past the tipping point. It has not worked out as he expected…

Social media and society: “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” from @JonHaidt in @TheAtlantic. Eminently worth reading in full.

See also: “The big idea: how to win the fight against disinformation.”

* Clay Shirky

###

As we follow Jaron Lanier‘s advice to “go to where you are kindest,” we might recall that it was on this date in 1397 that Geoffrey Chaucer “told” (read aloud) The Canterbury Tales for the first time at the court of Richard II.

A woodcut from William Caxton‘s second edition of The Canterbury Tales, printed in 1483

source

“Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics?”*…

Half a century ago, Lewis Mumford developed a concept that explains why we trade autonomy for convenience…

… Surveying the state of the high-tech life, it is tempting to ponder how it got so bad, while simultaneously forgetting what it was that initially convinced one to hastily click “I agree” on the terms of service. Before certain social media platforms became foul-smelling swamps of conspiratorial misinformation, many of us joined them for what seemed like good reasons; before sighing at the speed with which their batteries die, smartphone owners were once awed by these devices; before grumbling that there was nothing worth watching, viewers were astounded by how much streaming content was available at one’s fingertips. Overwhelmed by the way today’s tech seems to be burying us in the bad, it’s easy to forget the extent to which tech won us over by offering us a share in the good — or to be more precise, in “the goods.”

Nearly 50 years ago, long before smartphones and social media, the social critic Lewis Mumford put a name to the way that complex technological systems offer a share in their benefits in exchange for compliance. He called it a “bribe.” With this label, Mumford sought to acknowledge the genuine plentitude that technological systems make available to many people, while emphasizing that this is not an offer of a gift but of a deal. Surrender to the power of complex technological systems — allow them to oversee, track, quantify, guide, manipulate, grade, nudge, and surveil you — and the system will offer you back an appealing share in its spoils. What is good for the growth of the technological system is presented as also being good for the individual, and as proof of this, here is something new and shiny. Sure, that shiny new thing is keeping tabs on you (and feeding all of that information back to the larger technological system), but it also lets you do things you genuinely could not do before. For a bribe to be accepted it needs to promise something truly enticing, and Mumford, in his essay “Authoritarian and Democratic Technics,” acknowledged that “the bargain we are being asked to ratify takes the form of a magnificent bribe.” The danger, however, was that “once one opts for the system no further choice remains.” 

For Mumford, the bribe was not primarily about getting people into the habit of buying new gadgets and machines. Rather it was about incorporating people into a world that complex technological systems were remaking in their own image. Anticipating resistance, the bribe meets people not with the boot heel, but with the gift subscription.

The bribe is a discomforting concept. It asks us to consider the ways the things we purchase wind up buying us off, it asks us to see how taking that first bribe makes it easier to take the next one, and, even as it pushes us to reflect on our own complicity, it reminds us of the ways technological systems eliminate their alternatives. Writing about the bribe decades ago, Mumford was trying to sound the alarm, as he put it: “This is not a prediction of what will happen, but a warning against what may happen.” As with all of his glum predictions, it was one that Mumford hoped to be proven wrong about. Yet as one scrolls between reviews of the latest smartphone, revelations about the latest misdeeds of some massive tech company, and commentary about the way we have become so reliant on these systems that we cannot seriously speak about simply turning them off — it seems clear that what Mumford warned “may happen” has indeed happened…

Eminently worth reading in full: “The Magnificent Bribe,” by Zachary Loeb in @_reallifemag.

As to (some of) the modern implications of that bargain, see also Shoshana Zuboff‘s “You Are the Object of a Secret Extraction Operation”:

As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information. The promise of the surveillance dividend now draws surveillance economics into the “normal” economy, from insurance, retail, banking and finance to agriculture, automobiles, education, health care and more. Today all apps and software, no matter how benign they appear, are designed to maximize data collection.

Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic. The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage…

And resonantly: “AI-tocracy,” a working paper from NBER that links the development of artificial intelligence with the interests of autocracies. From the abstract:

Can frontier innovation be sustained under autocracy? We argue that innovation and autocracy can be mutually reinforcing when: (i) the new technology bolsters the autocrat’s power; and (ii) the autocrat’s demand for the technology stimulates further innovation in applications beyond those benefiting it directly. We test for such a mutually reinforcing relationship in the context of facial recognition AI in China. To do so, we gather comprehensive data on AI firms and government procurement contracts, as well as on social unrest across China during the last decade. We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats’ suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.

(And, Anne Applebaum warns, “The Bad Guys Are Winning.”)

* “Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics? The answer to this question is both paradoxical and ironic. Present day technics differs from that of the overtly brutal, half-baked authoritarian systems of the past in one highly favorable particular: it has accepted the basic principle of democracy, that every member of society should have a share in its goods. By progressively fulfilling this part of the democratic promise, our system has achieved a hold over the whole community that threatens to wipe out every other vestige of democracy.

The bargain we are being asked to ratify takes the form of a magnificent bribe. Under the democratic-authoritarian social contract, each member of the community may claim every material advantage, every intellectual and emotional stimulus he may desire, in quantities hardly available hitherto even for a restricted minority: food, housing, swift transportation, instantaneous communication, medical care, entertainment, education. But on one condition: that one must not merely ask for nothing that the system does not provide, but likewise agree to take everything offered, duly processed and fabricated, homogenized and equalized, in the precise quantities that the system, rather than the person, requires. Once one opts for the system no further choice remains. In a word, if one surrenders one’s life at source, authoritarian technics will give back as much of it as can be mechanically graded, quantitatively multiplied, collectively manipulated and magnified.”

– Lewis Mumford in “Authoritarian and Democratic Technics,” via @LMSacasas

###

As we untangle user agreements, we might recall that it was on this date in 1970 that Douglas Engelbart (see here, here, and here) was granted a patent (US No. 3,541,541) on the “X-Y Position Indicator for a Display System,” the world’s first prototype computer mouse– a wooden block containing the tracking apparatus, with a single button attached.

source

