(Roughly) Daily

Posts Tagged ‘process’

“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…

Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…

Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.

Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.

To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).

Bloom’s taxonomy of critical thinking makes a great deal of sense. Below, you can see how what we’d call “the creative act” occupies the top two entries of the pyramid of critical thinking, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.

To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.

In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…

… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.

We can view this through the lens of one of the most cited papers in all psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor Chess players think in terms of individual pieces and individual moves, but great Chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.

I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.

But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…

While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.

This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.

The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…

… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.

But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.

With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).

Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.

So then what’s the take-away?

For one, I think we should be cautious about AI exposure in children. E.g., another paper in the brain-drain research subfield found that it was younger AI users who showed the most dependency, and that the younger cohort didn’t match the critical thinking skills of older, more skeptical, AI users. As a young user put it:

It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.

What a lovely new concern for parents we’ve invented!

Parents already have to weather internal debates and worries about exposure to short-form video content platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how to not overexpose their child to social media or addictive video games, etc.

Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.

For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—independent of whether that skepticism is warranted or not!—makes for healthier AI usage.

In other words, pro-human bias and AI distrust are cognitively beneficial.

It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.

The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.

Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?”

* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b

###

As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861.  Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.

But in the late teens and early 20s, Whitehead shifted his focus to philosophy, the central result of which was a new field called process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).

“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”


“What was scattered gathers. What was gathered blows away.”*…


Process philosophy


One of the most ubiquitous assumptions in Western thinking is a metaphysics of substance. This way of seeing the world is so deeply embedded in the unthought wilderness at the back of our minds that it rarely occurs to us to even consider it. Whatever else we may come to blows about, we almost all feel justified in leaning on the idea that our world fundamentally consists of “things”, objects that exist, solid entities that have an identity, possess properties, fill space and so on. All of us, apart from one small minority that thinks differently: the valiant tradition of process philosophers.

Process philosophy holds that the world consists not of objects but of processes, that the fundamental mode of things is not being but doing, that the nature of a thing consists not in what it is but in what it does. Traditionally, we see events as being done by things; there are objects, and acts are predicated of them (the bird flies, the fish swims, the sun shines). Process philosophers see the doing as primary: not that the bird flies, but that there is, one might say, a “birding” which throughout a certain duration “is birding flying-ly”.

At this point many readers will begin reeling back with discomfort. Here we see that the very way our language structures our thinking makes it difficult to even comprehend a processual point of view. And the next step in this repulsion is to ask “why bother?”. After all, ostensibly there are such things as birds, and they do appear to fly, so why bother with such eccentric semi-nonsense as “birding flying-ly” when saying “the bird flies” perfectly communicates a perception of reality that nearly everyone can acknowledge without dispute?

Because if it is true, as process philosophers claim, that the way we habitually think and talk about our world merely biases us towards a certain way of conceptualising it, then this bias will spread outward beyond metaphysics to every other discipline that concerns the world (i.e. all of them). Process ontology is not merely a more eccentric way of describing the world, but a tool that helps us uncover truths and practice effective strategies we otherwise would never have envisaged…

Thinking differently: “A Cosmos of Flux: The Case for Process Philosophy.”

* Heraclitus

###

As we go with the flow, we might send adventurous birthday greetings to Gerald “Gerry” Malcolm Durrell; he was born on this date in 1925.  A British naturalist, zookeeper, conservationist, author, and television presenter, most of his work was rooted in his life as an animal collector and enthusiast… though he is probably most widely known for his autobiographical book My Family and Other Animals and its successors, Birds, Beasts, and Relatives and The Garden of the Gods... which have been made into television and radio mini-series many times, most recently as ITV’s/PBS’s The Durrells.



Written by (Roughly) Daily

January 7, 2020 at 1:01 am

“Life is about using the whole box of crayons”*…


How Crayola Crayons Are Made

* RuPaul

###

As we muse on Mango Tango, we might recall that it was on this date in 1895 that Frederick E. Blaisdell of Philadelphia, Pennsylvania, was granted U.S. patent No. 549,952 for a paper pencil (a self-sharpening pencil on which the tip could be renewed by peeling away some of the paper barrel– the type better known as a “china marker” today), as well as patent number 550,212 for a machine for manufacturing pencils.



Written by (Roughly) Daily

November 19, 2014 at 1:01 am