“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…
Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…
Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.
Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.
To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).
Bloom’s taxonomy of critical thinking makes a great deal of sense. Below, you can see how what we’d call “the creative act” occupies the top two entries of the pyramid of critical thinking, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.
To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.
In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…
… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.
We can view this through the lens of one of the most cited papers in all psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor chess players think in terms of individual pieces and individual moves, but great chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.
But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…
While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.
This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.
The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…
… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.
But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.
With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).
Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.
So then what’s the take-away?
For one, I think we should be cautious about AI exposure in children. E.g., there is evidence from another paper in the brain-drain research subfield wherein it was younger AI users who showed the most dependency, and the younger cohort also didn’t match the critical thinking skills of older, more skeptical, AI users. As a young user put it:
It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.
What a lovely new concern for parents we’ve invented!
Already nowadays, parents have to weather internal debates and worries about exposure to short-form video content platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how to not overexpose their child to social media or addictive video games, etc.
Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.
For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—independent of whether that skepticism is warranted or not!—makes for healthier AI usage.
In other words, pro-human bias and AI distrust are cognitively beneficial.
It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.
The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.
Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?”
* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b
###
As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.
But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy, the central result of which was a new field called process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).
“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”
“Human society, the world, and the whole of mankind is to be found in the alphabet”*…
… and so we endeavor to teach the alphabet to young children. Hunter Dukes on an amusing– and revealing– example from the 18th century…
It’s as easy as ABC! It’s as easy as pie! In an abecedarium titled The Tragical Death of a Apple-Pye, both idioms come true, as children learn an alphabet whose letters greedily gorge on pastry.
The edition featured here was published by John Evans, a major contender in late eighteenth- and early nineteenth-century children’s literature. His formula was simple: undercut the competition, including John Newbery’s firm, by selling unprecedentedly affordable books. He captured an emerging market: children’s books for hard-up families who had managed, against the odds, to acquire literacy. And while his competitors targeted a middle-class audience, Evans “stayed true to the street literature tradition in which he had been brought up”, writes literary historian Jonathan Cooper, who gives 1793–1796 as the likely date for Apple-Pye. It was printed on a press at No. 41 Long Lane, West Smithfield, and sold for a halfpenny, like Evans’ other sixteen-page chapbooks — a tiny format, roughly measuring 3.5 inches tall by 2.25 inches wide.
The book is really three texts in one. First comes an ABC list in which the “life and death” of an apple pie plays out across the alphabet. “Apple Pye, Bit it, Cut it, Dealt it, Eat it . . . Took it, View’d it, Wanted it, X, Y, Z, and &, they all wish’d for a piece in hand.” With so many letters vying for a slice, they decide together on an equitable solution: “They all agreed to stand in order / Round the Apple Pye’s fine border / Take turn as they in hornbook stand, / From great A, down to &”.
Next we encounter “A Curious Discourse That Passed Between the Twenty Five Letters at Dinner-Time”. The abecedarian order repeats, but now the letters speak. “Says A, give me a good large slice. . . . Says I, I love the juice the best.” Finally, Evans includes some self-promotion — “if my little readers are pleased with what they have found in this book, they have nothing to do but to run to Mr. Evans’s” — and a woodcut picture of “the old woman who made the Apple Pye”, which transitions abruptly into Christian pedagogy: “Grace before meat”, “Grace after meat”, “The Lord’s Prayer”. Like in other eighteenth-century children’s books, such as The Renowned History of Giles Gingerbread, learning here is figured as a kind of gustatory consumption: children eat up the alphabet lesson, while its glyphic personifications wolf down their slices. (The link between sweets and syllabaries is more ancient still: Horace recorded teachers bribing pupils with letter-shaped biscuits to encourage their alphabetical uptake.)
Evans’ edition was published in the late eighteenth century — reworking a primer by Richard Marshall from the 1760s — but The Tragical Death of a Apple Pye is perhaps an even older story, first published, according to some scholars, in 1671. For a modern reader, it preserves English paleography as it existed in an earlier state: across the sections, U and V are used interchangeably, like I and J, and “&” is the ultimate letter, after Z. In an attempt to offset the ampersand’s semiotic difference, teachers well into the nineteenth century instructed students to pronounce the final letters of the alphabet as “x, y, z, and per se &”, hiving off the ampersand with the Latin by itself…
“Peckish Alphabetics: The Tragical Death of a Apple-Pye,” from @hunterdukes.bsky.social in @publicdomainrev.bsky.social.
More on (and many more illustrations, including the image at the top, from) TTDoaAP here, via “The Gentle Author.”
* Victor Hugo
###
As we learn our letters, we might send instructive birthday greetings to a woman still hoeing this row: Denise Fleming; she was born on this date in 1950. An award-winning illustrator and creator of children’s books, she has written dozens of volumes for the very young, among which was her contribution to the tradition of which Evans was a part…
“Human intelligence is among the most fragile things in nature. It doesn’t take much to distract it, suppress it, or even annihilate it.”*…
As Sarah O’Connor observes, technology has changed the way many of us consume information, from complex pieces of writing to short video clips…
The year was 1988, a former Hollywood actor was in the White House, and Postman was worried about the ascendancy of pictures over words in American media, culture and politics. Television “conditions our minds to apprehend the world through fragmented pictures and forces other media to orient themselves in that direction,” he argued in an essay in his book Conscientious Objections. “A culture does not have to force scholars to flee to render them impotent. A culture does not have to burn books to assure that they will not be read . . . There are other ways to achieve stupidity.”
What might have seemed curmudgeonly in 1988 reads more like prophecy from the perspective of 2024. This month, the OECD released the results of a vast exercise: in-person assessments of the literacy, numeracy and problem-solving skills of 160,000 adults aged 16-65 in 31 different countries and economies. Compared with the last set of assessments a decade earlier, the trends in literacy skills were striking. Proficiency improved significantly in only two countries (Finland and Denmark), remained stable in 14, and declined significantly in 11, with the biggest deterioration in Korea, Lithuania, New Zealand and Poland.
Among adults with tertiary-level education (such as university graduates), literacy proficiency fell in 13 countries and only increased in Finland, while nearly all countries and economies experienced declines in literacy proficiency among adults with below upper secondary education. Singapore and the US had the biggest inequalities in both literacy and numeracy.
“Thirty per cent of Americans read at a level that you would expect from a 10-year-old child,” Andreas Schleicher, director for education and skills at the OECD, told me — referring to the proportion of people in the US who scored level 1 or below in literacy. “It is actually hard to imagine — that every third person you meet on the street has difficulties reading even simple things.”
In some countries, the deterioration is partly explained by an ageing population and rising levels of immigration, but Schleicher says these factors alone do not fully account for the trend. His own hypothesis would come as no surprise to Postman: that technology has changed the way many of us consume information, away from longer, more complex pieces of writing, such as books and newspaper articles, to short social media posts and video clips.
At the same time, social media has made it more likely that you “read stuff that confirms your views, rather than engages with diverse perspectives, and that’s what you need to get to [the top levels] on the [OECD literacy] assessment, where you need to distinguish fact from opinion, navigate ambiguity, manage complexity,” Schleicher explained.
The implications for politics and the quality of public debate are already evident. These, too, were foreseen. In 2007, writer Caleb Crain wrote an article called “Twilight of the Books” in The New Yorker magazine about what a possible post-literate culture might look like. In oral cultures, he wrote, cliché and stereotype are valued, conflict and name-calling are prized because they are memorable, and speakers tend not to correct themselves because “it is only in a literate culture that the past’s inconsistencies have to be accounted for”. Does that sound familiar?…
One recalls Plato’s report that Socrates lamented the introduction of writing (on the grounds that it would erode the centrality of memory and memorization and the tradition of oral disputation). And one reckons that in retrospect, even as one acknowledges that Socrates wasn’t wrong, one is not sorry that writing came to play the foundational role that it has in scholarship, culture, and commerce.
So perhaps we’re just in the first steps of a transition on the other side of which a new kind of literacy has displaced the current one (and advanced our state of being in the same way that writing has). Perhaps. Even then, in the moment it’s anxiety-provoking: even if we are bound for a new (higher-order?) literacy, it’s the curse of the earlier phases of a tectonic cultural shift that what we’re losing is much clearer than what we may gain.
“Are we becoming a post-literate society?” (gift article) by @sarahoconnorft.bsky.social in @financialtimes.com.
(The full OECD report– which includes a larger version of the chart above– is here.)
See also: “Stop speedrunning to a dystopia,” from Erik Hoel.
* Neil Postman, Amusing Ourselves to Death
###
As we fumble toward the future, we might recall that it was on this date in 1992 that HAL 9000, the AI character (and main antagonist) in Arthur C. Clarke’s (and Stanley Kubrick’s) Space Odyssey series, became operational.
More specifically: In the film, HAL became operational on 12 January 1992, at the HAL Laboratories in Urbana, Illinois, as production number 3. The activation year was 1991 in earlier screenplays and was changed to 1997 in Clarke’s novel, written and released in conjunction with the movie.
“Mathematics, rightly viewed, possesses not only truth, but supreme beauty”*…
Mark Frauenfelder at Boing Boing with a glorious memory…
This cover from the July 1965 issue of Scientific American illustrates the “Four Bugs Problem” featured in Martin Gardner’s “Mathematical Games” column about op art [see here].
The setup: Four bugs are placed at the corners of a square. They start crawling clockwise (or counterclockwise) at a constant rate, with each bug moving directly toward its neighbor. As the bugs move, they always form the corners of a square that both diminishes in size and rotates. Each bug’s path forms a logarithmic spiral.
Gardner said this can be generalized to any number of bugs starting at the corners of a regular polygon with n sides. In these cases, the bugs will always form the corners of a similar polygon that shrinks and rotates as they move.
Here’s an animated version of the Four Bugs Problem you can try out. If you want to try it with a different number of bugs, go here.
Your correspondent still has his copy of that issue. “The beautiful ‘Four Bugs Problem’” from @Frauenfelder in @BoingBoing.
* Bertrand Russell, “The Study of Mathematics” (collected in Mysticism and Logic)
###
As we marvel, we might send carefully-calculated birthday greetings to Ian Stewart; he was born on this date in 1945. As a teenager, he was an avid reader of Gardner’s “Mathematical Games,” from which he developed a love of the subject that led him to become a mathematician who has gone on to make important contributions to the field, especially in catastrophe theory.
But Stewart is more widely known as a popularizer of math– who credits Gardner with modeling the skills needed to be an entertaining communicator. Indeed, from 1991 to 2001 Stewart took over the Scientific American column (which had been renamed “Mathematical Recreations”).
For a list of his (remarkable) books on math and science, see here.
“Like everything metaphysical, the harmony between thought and reality is to be found in the grammar of the language”*…
Hunter Dukes in Public Domain Review on how scholars and pedagogues in the U.S. began to illustrate the principles of grammar– more specifically, how they began to diagram sentences…
“Once you really know how to diagram a sentence, really know it, you know practically all you have to know about English grammar”, Gertrude Stein once claimed. “I really do not know that anything has ever been more exciting than diagramming sentences. . . I like the feeling the everlasting feeling of sentences as they diagram themselves.” While one student’s lexical excitement is surely another’s slow death by gerund, Stein cuts to the heart of the grammatical pull. Is grammar prescriptive and conventional, something one learns to impose on language through trial and error? Or do sentences, in a sense, diagram themselves, revealing an innate logic and latent structure in language and the mind? More than a century before Noam Chomsky popularized the idea of a universal grammar, linguists in the United States began diagramming sentences in an attempt to visualize the complex structure — of seemingly divine origins — at their mother tongue’s core.
The history of diagramming sentences in the United States begins with James Brown’s American Grammar (1831). “Language is an emanation from God”, he writes. “As a gift, it claims our servitude; as a science, it demands our highest attention.” Accordingly, the student of grammar can lift himself up (educationally, devotionally) by knuckling down. “The mind becomes a passenger; the body his chariot; ideas his baggage; the earth his inn; hope his food; and another world his destination.” It was in American Grammar that Brown debuted construing as a method for parsing sentences using a system of square and round brackets to isolate major and minor sections. Major sections are “mechanically independent”; minor sections are “mechanically dependent”. Brown called this form of analysis close reading, but construing was only one half of the system. “As construing is a critical examination of the constructive relation between the sections of a sentence, so scanning is a critical investigation of the constructive relation between the words of a section.” Scanning involves ranking minor sections in ascending numerical order based on their relational distance from the major section. Playing a kind of grammarian god, Brown uses John 1 to demonstrate how his system can cleave sentential flesh. (In the beginning) [was the word] (and the word was) (with God) (and the word was God)…
Dukes goes on to trace, with wonderful examples, those who followed Brown into the syntactical thicket; for example…
More mesmerizing examples at “American Grammar: Diagraming Sentences in the 19th Century,” from @hunterdukes in @PublicDomainRev, with links to the original texts at the invaluable Internet Archive (@internetarchive).
* Ludwig Wittgenstein
###
As we parse, we might spare a thought for a man whose sentences were eminently diagrammable, Kenneth Grahame; he died on this date in 1932. A career officer at the Bank of England– he retired as its Secretary– he is better remembered as the author of tales he created to delight his son Alastair, The Wind in the Willows and The Reluctant Dragon (both of which were made into films by Disney: The Adventures of Ichabod and Mr. Toad and The Reluctant Dragon).