(Roughly) Daily


“Attention is the rarest and purest form of generosity”*…

An illustration depicting a large black fish with an open mouth, consuming smaller red fish, accompanied by the text 'what price media consolidation?'

… But that most valuable of gifts is being hijacked, subverted/converted into a commodity, and used to mold not just consumer behavior, but society-as-a-whole. We live in an attention economy, and its media/tech ownership landscape is becoming ever more consolidated.

Kyla Scanlon unpacks the way in which concentrated ownership of media and tech and their automated manipulation reshape democracy…

It’s nearly impossible not to get lost in the news right now. I was at a wedding last week, and every conversation eventually drifted back to the same subject: the World We Are in and All That is Happening. The ground feels like it’s moving faster than anyone can feasibly keep up with.

Some people think the shift is progress. Others see collapse. Either way, the line between digital and physical life is increasingly blurry. What happens online is real life. What we consume is what we become.

Plenty of thinkers have circled this before – Postman, Debord, Huxley, Orwell on media; Machiavelli, Tocqueville, Thucydides, Gibbon on human corruptibility during times of uncertainty. The convergence of endless information and a ragebait economy creates the perfect environment for splintering how we understand the world and how we understand each other.

The deeper problem is this: we no longer trust institutions to provide truth, fairness, or mobility. Once, they were scaffolding that helped us climb from raw data to wisdom. And when that scaffolding gives out, people adapt: some over-perform in the status race (because you have to) and others defect from obligations altogether (why work for institutions if they don’t work for me?).

There are a few ways to picture our distorted information ecosystem.

  • The DIKW Pyramid (Data → Information → Knowledge → Wisdom): raw posts and clicks at the bottom, trending content in the middle, shared truths above that, and finally wisdom, the rare ability to see causes instead of just symptoms.
  • Or the Ladder of Inference: we start with data, add meaning, make assumptions – and our beliefs tend to affect what data we select. Bots and algorithms hijack that ladder, nudging us toward polarized beliefs before we realize what’s happening.

We can combine them into what we might call a hierarchy of information:

  • Raw data: the endless stream of posts, likes, bot spam
  • Information: headlines, hashtags, trending things
  • Knowledge: the narratives we share and fight over.
  • Understanding: recognizing what might not be real (or is hyperreal)
  • Wisdom: systemic analysis, the ability to see causes instead of just symptoms.

Right now, we’re stuck sloshing around in the middle layers of the hierarchy: drowning in outrage, fighting over partisan hot takes, rarely reaching understanding, almost never wisdom.

Chaos always has an architect. And if we want to make sense of American democracy today, we need to understand who those architects are, and how they profit from confusion.

This polarization rests on media concentration. The Telecommunications Act of 1996 was sold as a way to increase competition in media and telecommunications, but in reality it did quite the opposite. Within five years, four firms controlled ~85% of US telephone infrastructure. That deregulated spine carried today’s consolidation of the entire media environment – not just telephones. Newspapers. Social media. TV stations.

We have the increasing concentration of media ownership, the financialization of attention, and the transformation of information from a public good into a private commodity to be bought, sold, and manipulated…

[Scanlon characterizes and explains the concentration, examines its impacts, and unpacks the roles of bots…]

When manufacturing consensus is both cheap to produce and valuable to those who benefit from confusion, you get industrial-scale manipulation.

Truth becomes whatever can capture the most attention in the shortest amount of time. Traditional journalism, with its slow fact-checking and institutional processes, can’t compete with bot-amplified outrage. Democratic deliberation, which requires shared facts and good faith dialogue, becomes nearly impossible when the information environment is designed to maximize conflict.

We’re living in a speculation economy where perception drives value more than fundamentals. Look at the stock market: Nvidia gained $150 billion in value on the back of a $100 billion OpenAI investment (which OpenAI will use to buy more Nvidia chips). Ten companies pass hundreds of billions back and forth, and the S&P jumps like it’s measuring something real.

It’s all memes wearing suits. Meme stocks and Dogecoin at least looked like jokes; now the same speculative energy runs through the corporate core. Attention, perception, and narrative drive valuation more than production or profit.

We’ve built a world where the hierarchy of information has flipped upside down.

At the bottom, bots flood us with raw noise. In the middle, outrage and team narratives harden into “knowledge.” At the top, the ladders to wisdom (journalism, schools, civic discourse, shared institutions) are weakened. The scaffolding that once helped us climb no longer holds.

The traditional solutions – fact-checking, media literacy, content moderation – assume we’re dealing with a content problem when we’re actually facing an infrastructure problem. You can’t fact-check your way out of a system designed to reward misinformation. You can’t educate your way around algorithms optimized for polarization. You can’t moderate your way past economic incentives that make confusion profitable.

Recognizing this as a market structure problem rather than an information problem changes everything. Instead of focusing on individual bad actors or specific false claims, you start thinking about the underlying systems that make manipulation both profitable and scalable.

The information wars are economic policy, determining how we allocate attention, structure incentives, and organize the flow of information that shapes every other market and political decision we make. I don’t think it’s useful to get on a Substack soapbox about this – but we need to take (1) the power of media seriously and (2) those trying to influence it extremely seriously. There is a way to get to the top of the information hierarchy! We don’t have to be stuck in these middle layers…

Follow the money: “Who’s Getting Rich Off Your Attention?” from @kyla.bsky.social

For more on how the Telecommunications Act of 1996 helped set all of this in motion, see: “On Jimmy Kimmel: It’s Time to Destroy the Censorship Machine and Repeal the Telecommunications Act of 1996” from @matthewstoller.bsky.social.

For more thoughts on why companies are behaving in the ways they are: “Why Corporate America Is Caving to Trump” and “Media consolidation is shaping who folds under political pressure — and who could be next.”

And lest we think that this came out of nowhere: “David Foster Wallace Tried to Warn Us About these Eight Things.”

[Image above: source]

Simone Weil

###

As we reclaim recognition, we might recall that on this date in 1452 an earlier information revolution began: Johannes Gutenberg started work on his Bible (which was completed and published in 1455). An inventor and craftsman, Gutenberg created the movable-type printing press, enabling a much faster (and cheaper) printing process. (Movable type was already in use in East Asia, but was slower and used for smaller jobs.) His Bible was his first major work, and his most impactful.

The printing press later spread across the world, leading to an information revolution– the unprecedented mass-spread of literature throughout Europe. It had a profound impact on the development of the Renaissance, Reformation, and Humanist movements.

A close-up view of an open Gutenberg Bible displayed in a museum, showcasing text on aged paper and illustrating the early printing technique.
Gutenberg Bible in the New York Public Library (source)

“The great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing.”*…

Data storyteller RJ Andrews demonstrates…

How do creatives – composers, painters, writers, scientists, philosophers – find the time to produce their opus?

Each routine day is represented as a continuous 24 hour cycle. Midnight is placed at 12 o’clock position and noon at 6 o’clock. Colors mark major categories of activity including work, sleep, and exercise…
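The radial encoding described in the excerpt (hour of day mapped clockwise around a dial, midnight at the 12 o’clock position, noon at 6 o’clock) can be sketched as a small helper. This is a hypothetical reconstruction of the mapping, not the code behind the actual graphic:

```python
import math

def hour_to_angle_deg(hour: float) -> float:
    """Map an hour of day (0-24) to degrees clockwise from the
    12 o'clock position: midnight -> 0, noon -> 180 (6 o'clock)."""
    return (hour % 24) * 360 / 24

def hour_to_xy(hour: float, radius: float = 1.0) -> tuple[float, float]:
    """Cartesian point on the dial for plotting (x right, y up)."""
    theta = math.radians(hour_to_angle_deg(hour))
    return (radius * math.sin(theta), radius * math.cos(theta))

print(hour_to_angle_deg(0))   # midnight -> 0.0  (12 o'clock)
print(hour_to_angle_deg(12))  # noon     -> 180.0 (6 o'clock)
print(hour_to_angle_deg(18))  # 6 pm     -> 270.0 (9 o'clock)
```

With each activity drawn as an arc between its start and end hours, a whole routine day becomes one continuous ring.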

The daily rituals of great creators: “Creative Routines,” from @infowetrust.

* Mason Currey, Daily Rituals: How Artists Work (the source of much of the data that informed the graphics above)

###

As we contemplate cultivating customs, we might send learned birthday greetings to Desiderius Erasmus Roterodamus, better known simply as Erasmus; he was born on this date in 1466.  A Catholic priest, social critic, teacher, translator, and theologian, probably best remembered for his book In Praise of Folly, he was the greatest scholar of the northern Renaissance, the first editor of the New Testament (“Do unto others…”), and an important figure in patristics and classical literature. 

Erasmus had contrasting experiences of routine: orphaned, he was sent to a series of monastic or semi-monastic schools, which he despised both for their discipline and for their disdain of inquiry. Graduating with few prospects, he joined an Augustinian monastery, where he considered his superiors “barbarians” for discouraging his classical studies. On ordination, he escaped– and began a career rooted in struggle (as he balanced the demands of study with those of serving as a clerk, a priest, and a tutor… all while trying to distinguish himself as a poet).

His luck changed in 1499, when he connected with a reformist English circle (notably John Colet and Thomas More), then with radical French Franciscan Jean Vitrier, and later with the Aldine New Academy in Venice… which led, in the Reformation, to his emergence as a prime influencer of European thought. Among fellow scholars and philosophers of that era he was– and is still– known as the “Prince of the Humanists.”

Erasmus’ views were contentious in his time and elicited a good bit of criticism. While we don’t know too much about his daily routine, we do know that for the last half of his life it included enough time on a regular basis to read these attacks and to prepare and publish apologetic works in his own defense, in many cases leading to a long series of back-and-forth polemical books… kind of like a Twitter feud, but at the speed of Gutenberg.

Portrait of Erasmus of Rotterdam (1523) by Hans Holbein the Younger

source

“Sometimes we drug ourselves with dreams of new ideas”*…

Further to last week’s piece on Samuel Arbesman‘s “incremental humanism,” Jennifer Banks unpacks the differences between the two leading “flavors” of humanism afoot today: one akin to Arbesman’s; the other, not so much…

In 2003, Edward Said wrote in the wake of the terrorist attacks of 11 September 2001 and in the context of the United States’ war on terror that ‘humanism is the only, and, I would go so far as saying, the final, resistance we have against the inhuman practices and injustices that disfigure human history.’ The moment, he felt, was ‘apocalyptic’, and the end was indeed near for him; he died of leukaemia later that year.

So why was it humanism that he held to so tightly as war and sickness cinched time’s horizon around him? Humanism, an intellectual and cultural movement that emerged in Renaissance Europe emphasising classical learning and affirming human potential, had been subject to decades of critique by the time Said was writing this. Among its many detractors were postcolonialists who argued that humanism’s elevation of a particular kind of human – Eurocentric, rational, empiricist, self-realising, secular and universal – had provided thin cover for the exploitation of large swaths of the world’s population.

But Said, one of the founders of postcolonial studies, hadn’t given up on the term, despite its imperialist entanglements. He imagined a humanism abused but not exhausted, an –ism more elastic and plural, more subject to critique and revision, and more acquainted with the limits of reason than many humanisms have historically been. Humanism, he argued, was more like an ‘exigent, resistant, intransigent art’ – an art that was not, for him, particularly triumphant. His humanism was defined by a ‘tragic flaw that is constitutive to it and cannot be removed’. It refused all final solutions to the irreconcilable, dialectical oppositions that are at the heart of human life – a refusal that ironically kept the world liveable and the future open.

At stake in his defence was not only the survival of the humanistic fields of study he had devoted his academic career to, but the survival, freedom and thriving of actual people, including those populations that humanisms had historically excluded. Various antihumanisms had gradually been eroding humanism’s stature within the academy, but it was humanism, he believed, with its positive ideas about liberty, learning and human agency – and not antihumanist deconstructions – that inspired people to resist unjust wars, military occupations, despotism and tyranny.

Humanism, however, fell further out of vogue in the two decades that followed. Humanities enrolments dropped dramatically at universities, and funding for departments like comparative literature, women’s studies, religion, and foreign languages got slashed. Increasingly, however, it wasn’t just the inadequacies of any –ism that were the problem. It was the subject at the heart of humanism that came under widespread attack: the human itself. Given that history could be read as a catalogue of human greed, blindness, exclusions and violence, the future seemed to belong to someone – or something – else. The humane in humanism seemed to be missing. Alternative ideologies like antihumanism, transhumanism, posthumanism and antinatalism seeped from the fringes into the mainstream, buoyed by their conviction that they might offer the planet or even the cosmos something more ethical, more humane even, than humans have ever been able to. Humanity’s time, perhaps, was simply up.

In his book The Revolt Against Humanity: Imagining a Future Without Us (2023), the American critic Adam Kirsch identifies the contested line between humanists and non-humanists as one of the defining faultlines of our political and cultural moment. The debates between them can feel merely semantic, the stuff of graduate seminars, but the revolt against humanity is likely to have major implications for our future, Kirsch argues, even if its prophecies about our imminent extinction don’t come true. ‘[D]isappointed prophecies,’ he writes, ‘have been responsible for some of the most important movements in history, from Christianity to Communism.’ Anyone committed to the prospect of a liveable future should pay close attention to what’s going on here.

I might have never put too much stock in a term like humanism if I had not read around in the transhumanist literature. I came to this work while researching a book on birth that explored the relationship between birth, death and the question of a human future. Does humanity have a future? Do we deserve one? What will that future look like? The answers to those questions will be determined by many forces – technological, economic, political, environmental and more – but also by how we experience and think about our own births and deaths. Despite large areas of convergence, humanists and transhumanists can end up with wildly different visions of our future, based on dramatically different understandings of birth and death, as one can see by comparing how a novelist (Toni Morrison) and a philosopher (Nick Bostrom) have explored these themes. Morrison offers us a prophetic celebration of Earthly, ongoing, biological generation and a future that allows for human freedom, while Bostrom points us toward a highly controlled surveillance world order, organised around a paranoid fear of human action, and oriented toward the pristine emptiness of outer space. Which future, we should ask ourselves, would we willingly choose?

Do read on for her analysis: “What awaits us?“, from @jenniferabanks in @aeonmag.

Apposite: “The Philosophy Of Co-Becoming” from @NoemaMag and “To pay attention, this is our endless and proper work,” @LMSacasas on Illich.

Audre Lorde

###

As we ponder possibility, we might spare a thought for Dandara, “the Warrior Queen” of the Quilombo dos Palmares, a settlement of Afro-Brazilian people who freed themselves from enslavement during Brazil’s colonial period. She was captured by colonial authorities on this date in 1694 and committed suicide rather than be returned to a life of slavery.

source

“Crossing the river by feeling the stones”*…

How to live in our complex world? Samuel Arbesman on Incremental Humanism…

… there is a decent amount of contingency in the paths that technological innovation takes:

…if we replayed the tape of human history, we would find that the sequence, timing, and (sometimes significant) details of inventions could be quite different, but that the main technological paradigms we discovered would also be discovered there. We would find steam power, electricity, plastics, and digital computers. But we wouldn’t find qwerty keyboards; we might not find keyboards at all. It’s tough to quantify this kind of thing in any meaningful way, and of course we can never know for sure, but my suspicion is that the technology of an alternate history of humans would look about as different from our own as the flora and fauna of Central Asia look from the flora and fauna of the central USA.

So when it comes to innovation, we forever live behind a Veil of Progress. This Veil prevents us from not only understanding the possible positive visions of the future that might win out, but even grasping how different technologies might recombine for further innovation. There is a certain fogginess towards the innovative future that we live within…

As per Kenneth Stanley and Joel Lehman in their book Why Greatness Cannot Be Planned… in a high-dimensional search space, aiming towards an objective will not work. Instead, it is best to develop novel stepping stones that can be productively recombined. This expanding of the adjacent possible is a much more effective strategy.

So how should we operate if we are constantly living behind the Veil of Progress? It requires humility and incremental tinkering.

The idea of humanism consists of, according to Sarah Bakewell, “free thinking, inquiry and hope.” But there are also other facets, from a sensibility of moderation, to a focus on improving the world.

I think incrementalism is also a key feature of humanism. As Adam Gopnik noted in his book A Thousand Small Sanities about liberalism: “Whenever we look at how the big problems got solved, it was rarely a big idea that solved them. It was the intercession of a thousand small sanities.”

This approach, of incremental humanism, is also a necessary part of the ideals of progress. Imagining a better future and incrementally improving towards this, even in an undirected manner, is the way of managing the veil of progress. As Rabbi Tarfon noted in the Talmud, “It is not your duty to finish the work, but neither are you at liberty to neglect it.” We are part of a long chain of improvements, all part of a tech tree that we can’t see and which involves a balance of innovation and maintenance (for we must preserve what we already have if we hope to be able to build on what has come before us). Revolution is the quick bandage that sounds appealing, but don’t be led to think it will necessarily result in enduring change. Big ideas can be seductive, but incremental change is the only way to live under uncertainty.

Living in a complex world where one’s impact is difficult to fully know requires an incremental humanism. This means having a vision of the future, but a more gradual and piecemeal one. This also means having a certain amount of long-term humility…

How to face the future: “Living with the Veil of Progress,” from @arbesman.

Chen Yun, via Deng Xiaoping

###

As we feel our ways, we might recall that it was on this date in 1961 that Decca Records released “I Fall to Pieces,” written by Hank Cochran and Harlan Howard and performed by the inimitable Patsy Cline. It started slow, but became Billboard‘s “Song of the Year” and has since, of course, become a classic.

Written by (Roughly) Daily

January 30, 2024 at 1:00 am

“There’s no idea in economics more beautiful than Arrow’s impossibility theorem”*…

Tim Harford unpacks Kenneth Arrow‘s Impossibility Theorem (which feels a bit like a socio-economic “Monty Hall Problem”) and considers its implications…

… if any group of voters gets to decide one thing, that group gets to decide everything, and we prove that any group of decisive voters can be pared down until there’s only one person in it. That person is the dictator. Our perfect constitution is in tatters.

That’s Arrow’s impossibility theorem. But what does it really tell us? One lesson is to abandon the search for a perfect voting system. Another is to question his requirements for a good constitution, and to look for alternatives. For example, we could have a system that allows people to register the strength of their feeling. What about the person who has a mild preference for profiteroles over ice cream but who loathes cheese? In Arrow’s constitution there’s no room for strong or weak desires, only for a ranking of outcomes. Maybe that’s the problem.

Arrow’s impossibility theorem is usually described as being about the flaws in voting systems. But there’s a deeper lesson under its surface. Voting systems are supposed to reveal what societies really want. But can a society really want anything coherent at all? Arrow’s theorem drives a stake through the heart of the very idea. People might have coherent preferences, but societies cannot…
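The claim that “societies cannot” have coherent preferences is easiest to see in the classic Condorcet cycle, the paradox that motivates Arrow’s result. A minimal sketch, using a hypothetical three-voter electorate for illustration: every individual ranking is transitive, yet pairwise majority vote produces a cycle.

```python
# Three voters, each with a perfectly coherent (transitive) ranking
# of three options. (Hypothetical electorate, for illustration only.)
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    if majority_prefers(x, y, ballots):
        print(f"majority prefers {x} over {y}")
# The majority prefers A over B, B over C, and C over A -- a cycle.
# Each voter is coherent; "society" has no transitive preference at all.
```

Arrow’s theorem generalizes this: no rank-based voting rule can guarantee a coherent social ordering while satisfying his fairness conditions.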

On choice, law, and the paradox at the heart of voting: “Arrow’s Impossibility Theorem,” from @TimHarford in @WhyInteresting. Eminently worth reading in full.

* Tim Harford

###

As we contemplate collective choice, we might send grateful birthday greetings to the man who “wrote the book” on perspective, Leon Battista Alberti; he was born on this date in 1404.  The archetypical Renaissance humanist polymath, Alberti was an author, artist, architect, poet, priest, linguist, philosopher, cartographer, and cryptographer.  He collaborated with Toscanelli on the maps used by Columbus on his first voyage, and he published the first book on cryptography that contained a frequency table.

But he is surely best remembered as the author of the first general treatise– Della Pictura (1434)– on the laws of perspective, which built on and extended Brunelleschi’s work to describe the approach and technique that established the science of projective geometry… and fueled the progress of painting, sculpture, and architecture from the Greek- and Arabic-influenced formalism of the High Middle Ages to the more naturalistic (and Latinate) styles of the Renaissance.

from Della Pictura

 source
