Posts Tagged ‘Gutenberg’
“It is what you read when you don’t have to that determines what you will be when you can’t help it”*…
… What we read– and, librarian Carlo Iacono argues, how we read.
Our inability to focus isn’t a failing. It’s a design problem, and the answer isn’t getting rid of our screen time…
Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span.
This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott does also write for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance.
I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong.
The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers both algorithmically curated rage-bait and the complete works of Shakespeare were itself the problem, rather than how we choose to use it…
[… observing that people who “can’t focus” on traditional texts can maintain extraordinary concentration when working across modes, he argues that “we haven’t become post-literate. We’ve become post-monomodal. Text hasn’t disappeared; it’s been joined by a symphony of other channels.”…]
… What troubles me most about the declinist position is not its diagnosis but its conclusion. The commentators who lament the post-literate society often identify the same villains I do. They recognise that technology companies are, in Marriott’s words, ‘actively working to destroy human enlightenment’, that tech oligarchs ‘have just as much of a stake in the ignorance of the population as the most reactionary feudal autocrat.’
And then they surrender. As Marriott says: ‘Nothing will ever be the same again. Welcome to the post-literate society.’
This is the move I cannot follow. To name the actors responsible and then treat the outcome as inevitable is to provide them cover. If the crisis is a force of nature, ‘screens’ destroying civilisation like some technological weather system, then there’s nothing to be done but write elegiac essays from a comfortable distance. But if the crisis is the product of specific design choices made by specific companies for specific economic reasons, then those choices can be challenged, regulated, reversed.
The fatalism, however beautifully expressed, serves the very interests it condemns. The technology companies would very much like us to believe that what they’re doing to human attention is simply the inevitable result of technological progress rather than something they’re doing to us, something that could, with sufficient political will, be stopped.
Your inability to focus isn’t a moral failing. It’s a design problem. You’re trying to think in environments built to prevent thinking. You’re trying to sustain attention in spaces engineered to shatter it. You’re fighting algorithms explicitly optimised to keep you scrolling, not learning.
The solution isn’t discipline. It’s architecture. Build different defaults. Create different spaces. Establish different rhythms. Make depth as easy as distraction currently is. Make thinking feel as natural as scrolling currently does.
What if, instead of mourning some imaginary golden age of pure text, we got serious about designing for depth across all modes? Every video could come with a searchable transcript. Every article could offer multiple entry points for different levels of attention. Our devices could recognise when we’re trying to think and protect that thinking. Schools could teach students to translate between modes the way they once taught translation between languages.
Books aren’t going anywhere. They remain unmatched for certain kinds of sustained, complex thinking. But they’re no longer the only game in town for serious ideas. A well-crafted video essay can carry philosophical weight. A podcast can enable the kind of long-form thinking we associate with written essays. An interactive visualisation can reveal patterns that pages of description struggle to convey.
The future belongs to people who can dance between all modes without losing their balance. Someone who can read deeply when depth is needed, skim efficiently when efficiency matters, listen actively during a commute, and watch critically when images carry the argument. This isn’t about consuming more. It’s about choosing consciously.
We stand at an inflection point. We can drift into a world where sustained thought becomes a luxury good, where only the privileged have access to the conditions that enable deep thinking. Or we can build something unprecedented: a culture that preserves the best of print’s cognitive gifts while embracing the possibilities of a world where ideas travel through light, sound and interaction.
The choice isn’t between books and screens. The choice is between intentional design and profitable chaos. Between habitats that cultivate human potential and platforms that extract human attention.
The civilisations that thrive won’t be the ones that retreat into text or surrender to the feed. They’ll be the ones that understand a simple truth: every idea has a natural form, and wisdom lies in matching the mode to the meaning. Some ideas want to be written. Others need to be seen. Still others must be heard, felt or experienced. The mistake is forcing all ideas through a single channel, whether that channel is a book or a screen.
Your great-grandchildren won’t read less than you do. They’ll read differently, as part of a richer symphony of sense-making. Whether that symphony sounds like music or noise depends entirely on the choices we make right now about the shape of our tools, the structure of our schools, and the design of our days.
The elegant lamenters offer a eulogy. I’m more interested in a fight…
Reunderstanding reading: “Books and screens,” from @carloiacono.bsky.social in @aeon.co.
* Oscar Wilde
###
As we turn the page, we might note that we’ve been here before, and celebrate the emergence of a design, an innovation, a technology that took on a life of its own and changed reading and… well, everything: this day in 1455 is the traditionally-given date of the publication of the Gutenberg Bible, the first Western book printed from movable type.
(Lest we think that there’s actually anything new under the sun, we might recall that The Jikji– the world’s oldest known extant book printed with movable metal type– was published in Korea in 1377, and that Bi Sheng created the first known movable type– out of baked clay– in China around 1040.)

“The historian of science may be tempted to exclaim that when paradigms change, the world itself changes with them”*…
What we now call AI has gone through a series of paradigm shifts, and there appears to be no end in sight. Ashlee Vance shares an anecdote that suggests that AI might itself be an agent (perhaps the agent) of a broader paradigm shift (or shifts)…
AI madness is upon many of us, and it can take different forms. In August 2024, for example, I stumbled upon a post from a 20-year-old who had built a nuclear fusor [see here] in his home with a bunch of mail-ordered parts. More to the point, he’d done this while under the tutelage of Anthropic’s Claude AI service…
… The guy who built the fusor in question, Hudhayfa Nazoordeen, better known as HudZah on the internet, was a math student on his summer break from the University of Waterloo. I reached out and asked to see his experiment in person partly because it seemed weird and interesting and partly because it seemed to say something about AI technology and how some people are going to be in for a very uncomfortable time in short order.
A couple days after the fusor posts hit X, I showed up at Nazoordeen’s front door, a typical Victorian in San Francisco’s Lower Haight neighborhood. Nazoordeen, a tall, skinny dude with lots of energy and the gesticulations to match, had been crashing there for the summer with a bunch of his university friends as they tried to soak in the start-up and AI lifestyle. Decades ago, these same kids might have yearned to catch Jerry Garcia and The Dead playing their first gigs or to happen upon an Acid Test. This Waterloo set, though, had a different agenda. They were turned on and LLMed up.
Like many of the Victorian-style homes in the city, this one had a long hallway that stretched from the front door to the kitchen with bedrooms jutting off on both sides. The wooden flooring had been blackened in the center from years of foot traffic, but that was not the first thing anyone would notice. Instead, they’d see the mass of electrical cables, 10, 25 and sometimes 50 feet long, coming out of each room and leading somewhere else in the house.
One of the cables powered a series of mind-reading experiments. Someone in the house, Nazoordeen said, had built his own electroencephalogram (EEG) device for measuring brain activity and had been testing it out on houseguests for weeks. Most of the cables, though, were there to feed GPU clusters, the computing systems filled with graphics chips (often designed by Nvidia) that have powered the recent AI boom. You’d follow a cable from one room to another and end up in front of a black box on the floor. All across San Francisco, I imagined, twenty-somethings were gathered around similar GPU altars to try out their ideas…
Vance tells HudZah’s story, recounts the building of his fusor, explains Claude’s (sometimes reluctant) role, and raises the all-too-legitimate safety questions the experiment poses… though in fairness, one might note that the web is rife with instructions for building a fusor, e.g., here, here, and here, some of which encouraged HudZah.
But in the end, the takeaway for Vance was not the product, but the process…
I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.
HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.
It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.
I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way. Obviously, these situations follow every major technology transition, but I’m a very tech-forward person, and there were things HudZah could accomplish on his machine that gave off alien vibes to me. So, er, like, good luck if you’re not paying attention to this stuff.
After doing his AI and fusor show for me, HudZah gave me a tour of the house. Most of his roommates had already bailed out and returned to Canada. He was left to clean up the mess, which included piles of beer cans and bottles of booze in the backyard from a last hurrah.
The AI housemates had also left some gold panning equipment in a bathtub. At some point during the summer, they had decided to grab “a shit ton of sand from a nearby creek” and work it over in their communal bathroom for fun.
I’m honestly not sure what the takeaway there was exactly other than that something profound happened to the Bay Area brain in 1849, and it’s still doing its thing…
Goodbye, Digital Natives; hello, AI Natives: “A Young Man Used AI to Build A Nuclear Fusor and Now I Must Weep,” from @ashleevance. Eminently worth reading in full.
And for a look at one attempt to understand what may be the emerging new paradigm(s) of which AI may be a motive part, see Benjamin Bratton‘s explanation of the work he and his colleagues are doing at a new institute at UCSD: “Antikythera.” See his recent Long Now Foundation talk on the same subject here.
On the other hand: “The Future Is Too Easy” (gift article) by David Roth in the always-illuminating Defector.
(Image above: source)
###
As we ponder progress, we might spare a thought for Johannes Gutenberg; he died on this date in 1468. A craftsman and inventor, he developed the movable-type printing press. (Though movable type was already in use in East Asia, Gutenberg’s press enabled a much faster rate of printing.)
The printing press spread across the world and led to an information revolution and the unprecedented mass dissemination of literature throughout Europe. It was a profound enabler of the arts and sciences of the Renaissance, of the Reformation (and Counter-Reformation), and of humanist movements… which is to say that it contributed to a series of paradigm shifts.
“Every great advance in science has issued from a new audacity of the imagination”*…
Itai Yanai and Martin Lercher on the importance of interdisciplinarity and creativity in science…
The hypothesis-testing mode of science, which François Jacob called “day science,” operates within the confines of a particular scientific field. As highly specialized experts, we confidently and safely follow the protocols of our paradigms and research programs. But there is another side of science, which Jacob called “night science”: the much less structured process by which new ideas arise and questions and hypotheses are generated. While day science is compartmentalized, night science is truly interdisciplinary. You may bring an answer from your home field to another discipline, or conversely, venturing into another field may let you discover a route towards answering a research question in your main discipline. To be most creative, we may be best off cultivating interests in many areas, much like Renaissance thinkers such as Leonardo da Vinci or Galileo Galilei. But this creativity-enhancing interdisciplinarity comes at a price we may call the “expert’s dilemma”: with the loss of your status as a highly focused expert comes a loss of credibility, making it harder to get your work accepted by your peers. To resolve the dilemma, we must find our own balance between disciplinary day-science expertise and interdisciplinary night-science creativity…
Eminently worth reading in full: “Renaissance minds in 21st century science,” from @ItaiYanai and @MartinJLercher.
See also: “Night Science“
And for more: see their project’s home page and listen to their podcast.
Apposite: “8 lessons on lifelong learning from an astrophysicist,” from Ethan Siegel.
* John Dewey
###
As we find a balance, we might send easily-reproducible birthday greetings to a man who was moved by necessity to cross disciplinary boundaries, Alois Senefelder; he was born on this date in 1771. A playwright and actor, he was having trouble getting his plays printed; he needed a less expensive and more efficient alternative to relief printing from hand-set type or etched plates. So he invented the technique we call lithography– the biggest revolution in the printing industry since Gutenberg’s movable type.
The principle is simple: oil-based printing ink and water repel each other. The image is drawn on a stone (Bavarian limestone, for Senefelder) with a greasy crayon, after which the stone is soaked in water, which is absorbed into the parts of the stone not covered by the crayon. The ink is then rolled onto the stone; the image areas accept ink, and the undrawn, water-soaked areas reject it. Finally, a piece of paper is pressed onto the stone, and the ink transfers from stone to paper.
Senefelder called the technique “stone printing” or “chemical printing,” but the French name “lithography” became more widely adopted. Today photo lithography is used to print magazines and books, but the original process of drawing by hand on litho stones still exists in the fine art world.

“What is an anarchist? One who, choosing, accepts the responsibility of choice.”*…
Per the Oxford Dictionaries, “anarchy” has two meanings:
1. a state of disorder due to absence or nonrecognition of authority or other controlling systems.
2. the organization of society on the basis of voluntary cooperation, without political institutions or hierarchical government; anarchism.
It’s fair to observe that, in common parlance, it’s the first definition that rules. The estimable Alan Jacobs puts in a word for the second, and positions it as something beyond the political, something spiritual…
Perhaps the most unusual element of my 2022 essay on anarchism is this: I present anarchism not as a political system but as a spiritual discipline. I don’t put the point quite that bluntly, but I come fairly close:
The first target of anarchistic practice ought to be whatever it is in me that resists anarchy — what resists negotiation, the turning toward the Other as neighbor and potential collaborator. I return to Odo’s line, “What is an anarchist? One who, choosing, accepts the responsibility of choice,” but I add this: The responsibility of choice arises when I acknowledge my own participation, in a thousand different ways, in the imposition of order on others. This is where anarchism begins; where the turning aside from the coldest of all cold monsters begins; where I begin. The possibility of anarchic action arises when I acknowledge my own will to power...
It should be obvious that if you are delighted with power politics – if you think the purpose of politics is “defeating the enemy and enjoying the spoils” of your victory – then you won’t be worried about your own will to power. You can just turn off your conscience and go on the attack, thinking only about winning (good) and losing (bad). My suggestion that the desire to impose order on others is a desire that needs to be reflected on will seem obviously silly to you. But there’s another way of thinking about the political order that is equally incompatible with the kind of reflection I counsel in that essay: the libertarian model.
Libertarianism doesn’t want to impose order on others, but its most passionate advocates have a strong tendency to assess existence in terms of winning and losing – winning and losing not in the corridors of political power but in the marketplace; the individual entrepreneur controlling the segment of the market in which he works. As Mark Zuckerberg likes to say, it’s all about DOMINATION; just not domination by law. Anarchism, by contrast — this is my argument in that essay — stands between (libertarian) chaos and (seeking to become) the Man. Some of the most thoughtful anarchists like to say that “anarchy is order” – but order that emerges from collaboration and cooperation rather than being imposed by governmental power. I don’t think it’s possible to create an anarchist system, because an anarchism imposed on people by those in power isn’t anarchism.
Here’s what I think can be done: Try, in every way we can think of, to increase the number of situations in our lives in which we are neither dehumanized by an omnipotent state nor engaged in ceaseless competition with one another in an omnipotent marketplace. As Wendell Berry has written, “Rats and roaches live by competition under the law of supply and demand; it is the privilege of human beings to live under the laws of justice and mercy.” We should assume that privilege whenever we can, and take it upon ourselves as a collaborative of equals to determine what, in any given case facing us, justice and mercy are. In other words, what I call the anarchic imperative is an attempt to rebalance what Berry has called “the two economies”:
For the thing that so troubles us about the industrial economy is exactly that it is not comprehensive enough, that, moreover, it tends to destroy what it does not comprehend, and that it is dependent upon much that it does not comprehend. In attempting to criticize such an economy, it is probably natural to pose against it an economy that does not leave anything out. And we can say without presuming too much, that the first principle of the kingdom of God is that it includes everything; in it the fall of every sparrow is a significant event. We are in it, we may say, whether we know it or not, and whether we wish to be or not. Another principle, both ecological and traditional, is that everything in the kingdom of God is joined both to it and to everything else that is in it. That is to say that the kingdom of God is orderly.
Amen to that. But what is the nature of that order? Eschatologically, it certainly ain’t anarchic: it is the kingdom of the archē, the source of all things, the Lord. But to understand and instantiate that Kingdom here and now – when, as St. Augustine says, the City of God and the City of Man are inevitably and confusingly mixed – we need to collaborate with one another to increase both our knowledge and our ability to act effectively.
I have argued at some length that Christians aren’t pluralists – we believe that “at the name of Jesus every knee will bow” (Phil. 2:10) – but in our current position we should expect, accept, and even embrace plurality. We need to cultivate the virtues appropriate to a plural world, and we can do that by expanding the sphere of voluntary collaboration, negotiation among equals, emergent order, even when such expansion makes life more difficult for us. That’s anarchism as a spiritual discipline…
Charting a course between libertarianism and autocracy: “Anarchism as a spiritual discipline.”
[The image above is from Jacobs’s Harper’s essay– eminently worth reading]
* Ursula K. Le Guin, who created Odo (and Odoism) in The Dispossessed
###
As we choose, we might recall that it was (probably) on this date that the first edition of what we know as the Gutenberg Bible was published.
While many believe that the Bible was Johannes Gutenberg’s first work using movable type, it was probably his second or even his third. [Indeed, there was an earlier (32-line) printing of parts of the Bible’s text, labeled an “indulgence.”] The Gutenberg press was in operation by 1450, and a German poem is known to have been printed before the Bible. In that same year, Gutenberg began the painstaking process of hand-placing every letter for every page of the new Bible. The 42-line Gutenberg Bible [the one we know as “The Gutenberg Bible”] is believed to have been completed on this day in 1456. About 180 copies of the book were printed, a run that seems rather small for a first edition…







