“You live and learn. At any rate, you live.”*…
… and to the extent that we care about our democracy, that’s an issue.
In an article based on his recent Sakurada-Kai Foundation Oxbridge Lecture at Keio University, Tokyo, John Dunn argues that our democracies depend on our picking up the pace of learning. The abstract:
There cannot be a coherent democratic theory because democracy is not a determinate topic. Representative democracy is a relatively modern regime form. It now needs rehabilitation because so many instances have performed poorly for so long. Representative democracy is now also an aging regime. As a type of state, it is subject to the territorial contentiousness and contested legitimacy of any state. It claims its legitimacy from iterative popular choice, but the plausibility of that claim is increasingly strained by the drastic disparities in life chances reproduced through the property systems it protects. The inherent difficulty for citizens to judge how to advance their collective interests is aggravated by the recent transformation of the information economy. In the cumulative damage inflicted by climate change it faces a deadlier peril than any previous regime and one which only a citizenry that can enlighten itself in time can reasonably hope to nerve itself to meet…
There follows a fascinating– and provocative– elaboration of this thesis in which Dunn considers the history of democracy and the alternatives with which it has, since its inception, vied. He concludes in a bracing fashion…
… The varieties of autocracy which will be on offer wherever the rest of the world has the opportunity to take them up will be without exception the reverse of enlightened – instrumentally and compulsively bound to the extremes of obscurantism, Darkness as a full-on fideist commitment, deliberate self-blinding as a navigational strategy. Move fast, break lots, and never pause to inspect the wreckage.
Representative democracy has recently proved itself a poor structure for collective enlightenment, but the case for it depends on its at least not precluding that, its being still open to making the attempt, and responding to what it can contrive to learn. The most optimistic vision of democracy in action has always seen it as an opportunity for collective self-education on the content of shared goods and the means to achieve them. If that is scarcely a realist picture of what it has ever been, at least it is an image of the right shape. It is too late to ask who will educate the educators. At this point we must educate ourselves together and heed the lessons of that education or we must and will die – not just each of us one by one, as we were always fated to do, but soon enough all of us and for ever…
Eminently worth reading in full: “Can Democracy be Rehabilitated?”
Apposite: “How American Democracy Fell So Far Behind,” from Steven Levitsky and Daniel Ziblatt (gift article)
* Douglas Adams, Mostly Harmless
###
As we devote ourselves to democracy, we might spare a thought for Ludwig van Beethoven; he died on this date in 1827. A crucial figure in the transition between the Classical and Romantic eras in Western music, he remains one of the most famous and influential of all composers. His best-known compositions include 9 symphonies, 5 concertos for piano, 32 piano sonatas, and 16 string quartets. He also composed other chamber music, choral works (including the celebrated Missa Solemnis), a single opera (Fidelio), and numerous songs.
Relevant to the piece above…
Beethoven admired the ideals of the French Revolution, so he dedicated his third symphony to Napoleon Bonaparte… until Napoleon declared himself emperor. Beethoven then sprang into a rage, ripped the front page from his manuscript, and scrubbed out Napoleon’s name…
Beethoven’s temper and Symphony No. 3 ‘Eroica’

“It is what you read when you don’t have to that determines what you will be when you can’t help it”*…
… What we read– and, librarian Carlo Iacono argues, how we read.
Our inability to focus isn’t a failing. It’s a design problem, and the answer isn’t getting rid of our screen time…
Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span.
This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott also writes for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance.
I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong.
The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers both algorithmically curated rage-bait and the complete works of Shakespeare were itself the problem, rather than how we decide to use it…
[… observing that “people who ‘can’t focus’ on traditional texts can maintain extraordinary concentration when working across modes,” he argues that “we haven’t become post-literate. We’ve become post-monomodal. Text hasn’t disappeared; it’s been joined by a symphony of other channels.”…]
… What troubles me most about the declinist position is not its diagnosis but its conclusion. The commentators who lament the post-literate society often identify the same villains I do. They recognise that technology companies are, in Marriott’s words, ‘actively working to destroy human enlightenment’, that tech oligarchs ‘have just as much of a stake in the ignorance of the population as the most reactionary feudal autocrat.’
And then they surrender. As Marriott says: ‘Nothing will ever be the same again. Welcome to the post-literate society.’
This is the move I cannot follow. To name the actors responsible and then treat the outcome as inevitable is to provide them cover. If the crisis is a force of nature, ‘screens’ destroying civilisation like some technological weather system, then there’s nothing to be done but write elegiac essays from a comfortable distance. But if the crisis is the product of specific design choices made by specific companies for specific economic reasons, then those choices can be challenged, regulated, reversed.
The fatalism, however beautifully expressed, serves the very interests it condemns. The technology companies would very much like us to believe that what they’re doing to human attention is simply the inevitable result of technological progress rather than something they’re doing to us, something that could, with sufficient political will, be stopped.
Your inability to focus isn’t a moral failing. It’s a design problem. You’re trying to think in environments built to prevent thinking. You’re trying to sustain attention in spaces engineered to shatter it. You’re fighting algorithms explicitly optimised to keep you scrolling, not learning.
The solution isn’t discipline. It’s architecture. Build different defaults. Create different spaces. Establish different rhythms. Make depth as easy as distraction currently is. Make thinking feel as natural as scrolling currently does.
What if, instead of mourning some imaginary golden age of pure text, we got serious about designing for depth across all modes? Every video could come with a searchable transcript. Every article could offer multiple entry points for different levels of attention. Our devices could recognise when we’re trying to think and protect that thinking. Schools could teach students to translate between modes the way they once taught translation between languages.
Books aren’t going anywhere. They remain unmatched for certain kinds of sustained, complex thinking. But they’re no longer the only game in town for serious ideas. A well-crafted video essay can carry philosophical weight. A podcast can enable the kind of long-form thinking we associate with written essays. An interactive visualisation can reveal patterns that pages of description struggle to convey.
The future belongs to people who can dance between all modes without losing their balance. Someone who can read deeply when depth is needed, skim efficiently when efficiency matters, listen actively during a commute, and watch critically when images carry the argument. This isn’t about consuming more. It’s about choosing consciously.
We stand at an inflection point. We can drift into a world where sustained thought becomes a luxury good, where only the privileged have access to the conditions that enable deep thinking. Or we can build something unprecedented: a culture that preserves the best of print’s cognitive gifts while embracing the possibilities of a world where ideas travel through light, sound and interaction.
The choice isn’t between books and screens. The choice is between intentional design and profitable chaos. Between habitats that cultivate human potential and platforms that extract human attention.
The civilisations that thrive won’t be the ones that retreat into text or surrender to the feed. They’ll be the ones that understand a simple truth: every idea has a natural form, and wisdom lies in matching the mode to the meaning. Some ideas want to be written. Others need to be seen. Still others must be heard, felt or experienced. The mistake is forcing all ideas through a single channel, whether that channel is a book or a screen.
Your great-grandchildren won’t read less than you do. They’ll read differently, as part of a richer symphony of sense-making. Whether that symphony sounds like music or noise depends entirely on the choices we make right now about the shape of our tools, the structure of our schools, and the design of our days.
The elegant lamenters offer a eulogy. I’m more interested in a fight…
Reunderstanding reading: “Books and screens,” from @carloiacono.bsky.social in @aeon.co.
* Oscar Wilde
###
As we turn the page, we might note that we’ve been here before, and celebrate the emergence of a design, an innovation, a technology that took on a life of its own and changed reading and… well, everything: this day in 1455 is the traditionally given date of the publication of the Gutenberg Bible, the first Western book printed from movable type.
(Lest we think that there’s actually anything new under the sun, we might recall that The Jikji– the world’s oldest known extant book printed with movable metal type– was published in Korea in 1377; and that Bi Sheng created the first known movable type– made of baked clay– in China around 1040.)

“Tell me to what you pay attention and I will tell you who you are”*…

Before the attention economy consumed our lives, “pursuit tests” devised by the US military coupled man to machine with the aim of assessing focus under pressure. D. Graham Burnett explores these devices for evaluating aviators, finding a pre-history of the laboratory research that has relentlessly worked to slice and dice the attentional powers of human beings…
We worry about our attention these days — nearly all of us. There is something. . . wrong. We cannot manage to do what we want to do with our eyes and minds — not for long, anyway. We keep coming back to the machines, to the screens, to the notifications, to the blinking cursor and the frictionless swipe that renews the feed.
An ethnographer from Mars, moving among us (would we even notice?), might have trouble understanding our complaint: “Trouble with their attention? They stare at small slabs of versicolor glass all day! Their attentive powers are. . . sublime!”
And that misunderstanding rather sharpens the point: we don’t have any problem at all with the forms of attention that involve remaining engaged with, and responsive to, machines. We are amazing at the click and tap of durational vigilance to this or that stimulus, presented at the business end of a complex device. Our uncanny and immersive cybernetic attention is a defining characteristic of the age. Our human attention — our ability to be with ourselves and with others, our ability to receive the world with our minds and senses, our ability to daydream, read a book uninterrupted, or watch a sunset — well, many of us are finding it increasingly difficult to remember what that might even mean.
This isn’t really an accident. Over the last century or so, a series of elaborate programs of laboratory research have worked to slice and dice the attentional powers of human beings. Their aim? To understand the operational capacities of those who would be asked to shoot down airplanes, monitor radar screens, and otherwise sit at the controls of large and expensive machines. Seated in front of countless instruments, experimental subjects were asked to listen and look, to track and trigger. Psychologists stood by with stopwatches, quantifying our cybernetic capacities, and seeking ways to extend them. For those of us who have come of age in the fluorescence of the “attention economy”, it is interesting to look back and try to catch glimpses of the way that the movement of human eyeballs came under precise scrutiny, the way that machine vigilance became a field of study. We know now that the mechanomorphic attention dissected in those laboratories is the machine attention that is relentlessly priced in our online lives — to deleterious effects.
You could say that this process began with the fascinating and now mostly forgotten tool known as the “pursuit test”. Part steampunk videogame, part laboratory snuff-flick, the pursuit test staged and restaged the integration of man and machine across the first decades of the twentieth century…
Fascinating– and timely: “Cybernetic Attention– All Watched over by Machines We Learned to Watch,” from @publicdomainrev.bsky.social. Eminently worth reading in full.
* José Ortega y Gasset
###
As we untangle engagement, we might send thoughtful birthday greetings to a man whose work influenced the endeavors described in the piece featured above, Hermann Ebbinghaus; he was born on this date in 1850. A psychologist, he pioneered the experimental study of memory and discovered the learning curve, the forgetting curve, and the spacing effect.

“It’s difficult to make predictions, especially about the future”*…
It’s that time of year: predictions and forecasts and outlooks for 2026 on just about everything are everywhere. Scott Belsky‘s list is eminently worth a read…
From talent arbitrage and “proof of craft” to hardware moats, ambient listening, homegrown software, and the end of waste – what should we expect to see in the coming year? What are the implications?…
“12 Outlooks for the Future: 2026+”
For a bracing list of “black swan” possibilities in the new year, see “15 Scenarios That Could Stun the World in 2026.”
But in the interest of starting this year on as positive a note as possible: “1,084 Reasons the World Isn’t Falling Apart.”
* an axiom attributed to Niels Bohr and Yogi Berra, among others
###
As we contemplate what’s coming, we might recall that it was on this date in 1902 that Andrew Carnegie filed the incorporation papers for what he called the Carnegie Institution of Washington– which we now know as Carnegie Science. The first of 20 not-for-profit institutions he founded (in addition to his other philanthropy, e.g., funding more than 2,500 public libraries), Carnegie Science conducts fundamental research both directly and in collaboration with other organizations (mostly research universities). In its 120+ year history, it has contributed scores of foundational discoveries– e.g., the expanding universe, the existence of dark matter, transposons (“jumping genes”)– across multiple scientific disciplines. Its principals have won multiple Nobel Prizes (and myriad other awards) and have contributed to scientific and technical policy (e.g., Carnegie President Vannevar Bush) and to scientific education.

“Our research universities are the best in the world. But a leadership position is easy to lose and difficult to regain.”…
Revisiting a key topic that we’ve touched on before…
Modern U.S. research universities arose in the late 19th century. Their work has laid the foundation for major advances in health and medicine, technology, communications, agriculture/food, economics, energy, and national security, even as they have educated students to be scientific, technical, commercial, and cultural leaders and innovators.
Today, as a product of what historians have called a “virtuous circle of incentives and resources,” American academic research institutions are top of the pops… and not at all coincidentally, so is the U.S. economy.
… But that dominance is under attack, both by the Trump Administration and by state governments around the country that are actively undermining the work of their state universities.
It’s worth remembering that, into the early twentieth century, German universities– the original models for the American approach– dominated the ranks of the world’s leading research institutions.
As the U.S. increasingly models the behavior of German authorities in the 1930s, the vital contributions of research universities are at risk.
When Hitler rose to power in the 1930s, the leaders of America’s most august universities didn’t all comport themselves as one might have wished. We can only hope that this time– as the threat is aimed directly at them– they will respond more strongly.
Meantime, we can all add our voices to the defense of academic freedom and support for vital research.
* Research Universities and the Future of America, a report from The National Research Council, 2012 (Page 68)
###
As we cease self-sabotage, we might spare a thought for a professorial paragon of the virtues of the institutions in question (in his case, on the cultural as opposed to the scientific/technical front), George Lyman Kittredge, a professor at Harvard; he died on this date in 1941. Kittredge’s edition of Shakespeare’s work was the scholarly standard in the early 20th century; he promoted the study of folklore and folk songs (encouraging students like John A. Lomax, and thus Lomax’s son, Alan); and he was instrumental in the formation and management of the Harvard University Press.