(Roughly) Daily


“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…

Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…

Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.

Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.

To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).

Bloom’s taxonomy of critical thinking makes a great deal of sense. In his pyramid of critical thinking, what we’d call “the creative act” occupies the top two entries, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.

To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.

In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…

… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.

We can view this through the lens of one of the most cited papers in all of psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor chess players think in terms of individual pieces and individual moves, but great chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
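
To make the chunking idea concrete, here’s a toy sketch in Python (the chess patterns and counts are invented for illustration; this is a cartoon of Miller’s point, not a model from his paper). Both players get the same ~7 slots of working memory; the expert’s slots simply hold richer chunks:

```python
# Toy illustration of Miller-style chunking (all data invented for the example).
# A novice tracks individual pieces; an expert tracks named patterns ("chunks").

WORKING_MEMORY_SLOTS = 7  # Miller's ~7 +/- 2 limit applies to both players

# The novice holds each piece as a separate item: 8 items, over budget.
novice_view = ["Kg1", "Rf1", "Pf2", "Pg2", "Ph2", "Nf3", "Bc4", "Qd1"]

# The expert holds the same position as 4 chunks covering all 8 pieces.
expert_view = {
    "castled-kingside": ["Kg1", "Rf1", "Pf2", "Pg2", "Ph2"],
    "developed-knight": ["Nf3"],
    "italian-bishop":   ["Bc4"],
    "queen-at-home":    ["Qd1"],
}

def fits_in_memory(items):
    """True if the list of items stays within the ~7-slot budget."""
    return len(items) <= WORKING_MEMORY_SLOTS

print(fits_in_memory(novice_view))        # False: 8 raw pieces exceed the limit
print(fits_in_memory(list(expert_view)))  # True: 4 chunks, yet they cover all 8 pieces
```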

I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.

But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…

While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.

This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.

The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…

… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.

But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.

With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).

Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.

So then, what’s the takeaway?

For one, I think we should be cautious about AI exposure in children. E.g., another paper in the brain-drain research subfield found that younger AI users showed the most dependency, and that the younger cohort also didn’t match the critical thinking skills of older, more skeptical AI users. As a young user put it:

It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.

What a lovely new concern for parents we’ve invented!

Parents already have to weather internal debates and worries about exposure to short-form video platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition: agonizing over when to introduce phones, what kind of phone, and how not to overexpose their child to social media or addictive video games.

Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and, furthermore, bringing it questions they should be thinking through themselves, because inevitably those questions will be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.
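
For a sense of what that healthy, chunk-level delegation looks like in practice, here’s a hypothetical sketch of the sort of Matplotlib boilerplate a graduate student might hand off to an AI assistant (the data and labels are invented for illustration). The scientific judgment lives in choosing what to plot and how to interpret it; the plumbing below is what gets delegated:

```python
# Hypothetical example of delegable Matplotlib plumbing: shaded error bands,
# axis labels, legend placement. The data here is synthetic; only the
# boilerplate is the point.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
trials = rng.normal(np.sin(x), 0.3, size=(20, x.size))  # 20 synthetic trials
mean, sd = trials.mean(axis=0), trials.std(axis=0)

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(x, mean, label="mean response")
ax.fill_between(x, mean - sd, mean + sd, alpha=0.2, label="±1 s.d.")
ax.set_xlabel("stimulus intensity (a.u.)")
ax.set_ylabel("response")
ax.legend(frameon=False)
fig.tight_layout()
plt.show()
```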

For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: skepticism of AI capabilities—independent of whether that skepticism is warranted!—makes for healthier AI usage.

In other words, pro-human bias and AI distrust are cognitively beneficial.

It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as we’ve had to learn our way into effective use of new technologies before, both as individuals and as societies, so we will with AI.

The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.

Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?”

* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b

###

As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.

But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy, the central result of which was a new field called process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).

“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”


“You simply cannot invent any conspiracy theory so ridiculous and obviously satirical that some people somewhere don’t already believe it”*…

As Greg Miller explains, conspiracy theories seem to meet psychological needs and can be almost impossible to eradicate. But there does appear to be a remedy: keep them from taking root in the first place…

If conspiracy theories are as old as politics, they’re also — in the era of Donald Trump and QAnon — as current as the latest headlines. Earlier this month, the American democracy born of an eighteenth century conspiracy theory faced its most severe threat yet — from another conspiracy theory, that (all evidence to the contrary) the 2020 presidential election was rigged. Are conspiracy theories truly more prevalent and influential today, or does it just seem that way?

The research isn’t clear. Rosenblum and others see evidence that belief in conspiracy theories is increasing and taking dangerous new forms. Others disagree. But scholars generally do agree that conspiracy theories have always existed and always will. They tap into basic aspects of human cognition and psychology, which may help explain why they take hold so easily — and why they’re seemingly impossible to kill.

Once someone has fully bought into a conspiracy theory, “there’s very little research that actually shows you can come back from that,” says Sander van der Linden, a social psychologist at the University of Cambridge whose research focuses on ways to combat misinformation. “When it comes to conspiracy theories, prevention is better than cure.”

Talking a true believer out of their belief in a conspiracy can be nearly impossible. (The believer will assume you’re hopelessly naïve or, worse, that you’re part of the cover-up.) Even when conspiracy theories make bold predictions that don’t come true, such as QAnon’s claim that Trump would win reelection, followers twist themselves in logical knots to cling to their core beliefs. “These beliefs are important to people, and letting them go means letting go of something important that has determined the way they see the world for some time,” says [Karen Douglas, a psychologist who studies conspiracy thinking at the University of Kent in the United Kingdom].

As a result, some researchers think that preventing conspiracy theories from taking hold in the first place is a better strategy than fact-checking and debunking them after they do — and they have been hard at work developing and testing such strategies. Van der Linden sees inoculation as a useful metaphor here. “I think one of the best solutions we have is to actually inject people with a weakened dose of the conspiracy…to help people build up mental or cognitive antibodies,” he says.

One way he and his colleagues have been trying to do that (no needles required) is by developing online games and apps. In a game called Bad News, for example, players assume the role of a fake news creator trying to attract followers and evolve from a social media nobody into the head of a fake-news empire…

The critical question — pushing the vaccine metaphor to its limits — is how to achieve herd immunity, the point at which enough of the population is immune so that conspiracy theories can’t go viral. It might be difficult to do that with games because they require people to take the time to engage, says Gordon Pennycook, a behavioral scientist at the University of Regina in Canada. So Pennycook has been working on interventions that he believes will be easier to scale up.

Even as researchers push to develop such measures, they acknowledge that eradicating bogus conspiracy theories may not be possible. Conspiracy theories flourished as far back as the Roman Empire, and they inspired an angry mob to storm the U.S. Capitol just last week. Specific theories may come and go, but the allure of conspiracy theories for people trying to make sense of events beyond their control seems more enduring. For better — and of late, very much for worse — they appear to be a permanent part of the human condition…

Eminently worth reading in full: “The enduring allure of conspiracies,” from @dosmonos in @NiemanLab.

* Robert Anton Wilson

###

As we fumble with the fantastic, we might send prodigious birthday greetings to G.K. Chesterton; he was born on this date in 1874. The author of 80 books, several hundred poems, over 200 short stories, 4,000 essays, and several plays, he was a literary and social critic, historian, playwright, novelist, Catholic theologian and apologist, debater, and mystery writer. Chesterton was a columnist for the Daily News, the Illustrated London News, and his own paper, G. K.’s Weekly, and wrote articles for the Encyclopædia Britannica. Chesterton created the priest-detective Father Brown, who appeared in a series of short stories, and had a huge influence on the development of the mystery genre; his best-known novel is probably The Man Who Was Thursday.

Chesterton’s faith, which he defended in print and speeches, brought him into conflict with the most famous atheist of the time, George Bernard Shaw, who said (on the death of his “friendly enemy”), “he was a man of colossal genius.”

The lunatic is the man who lives in a small world but thinks it is a large one; he is the man who lives in a tenth of the truth, and thinks it is the whole. The madman cannot conceive any cosmos outside a certain tale or conspiracy or vision.

G. K. Chesterton
George Bernard Shaw, Hilaire Belloc, and G. K. Chesterton
