(Roughly) Daily

Posts Tagged ‘thinking’

“We shall require a substantially new manner of thinking if mankind is to survive”*…


The estimable Robin Sloan on the challenge of keeping our language– our words and our use of them– up to the task of wrestling with our present and our future…

The overloading of common words is well underway: new language models have “thinking” modes, “reasoning” capabilities! What this means, in practice, is that they’ve learned to produce a special kind of text, the conversion of the linguistic if-then into a dynamo that spins and spins and, often, magically — yes, it is magical — produces useful results.

Here is one distinction among several: this process can only compound — the models can only “think” by spooling out more text — while human thinking often does the opposite: retreats into silence, because it doesn’t have words yet to say what it wants to say.

Human thinking often washes the dishes, then goes for a walk.

So, if you redefine “thinking” to mean “arriving at a solution through an iterative linguistic loop” … yes, that’s what these models do. That definition is IMHO pretty thin.

We talk about humans thinking harder, which is not the same as thinking longer. I think most people know from experience that thinking longer generally just makes you anxious. But that’s what the models do, and not only longer, but in parallel, all those step-by-step monologues spilling out simultaneously, somewhere in the dark of a data center. “Quantity has a quality all its own,” said Stalin, maybe … 

Well, okay — what does it mean for a human to think harder? Reasonable people will disagree (and in interesting ways) but, for my part, I think it means prospecting new analogies; pitching your inquiry out away from the gravitational attractors of protocol and cliché; turning the workpiece around to inspect it from new angles; and especially bringing more senses into the mix — grounding yourself in reality. You’ll note these moves are challenging or impossible for systems that operate only on/with/inside language.

A couple of years ago, when I wondered if language models are in hell, I expressed some hope about the richness of multimodal training. So far, this hasn’t panned out. Rather than images anchoring text in a richer, more embodied realm, the marriage seems to have gone the opposite direction. The models chop images into sequences of tokens — big bright pictures become spindly threads, a bit sad — and feed them in along with everything else.

We are going to lose this word — we might already have lost it — but/and we can put a marker down; a gravestone, you might call it; for a kind of thinking that used to mean more than “more”.

Other useful words, still with us, include: imagination, ingenuity, insight. Clarity, most of all. Clarity is what Einstein was seeking when he sat and thought hard about the relative motion of magnets and conductors. He wanted to push through language, beyond it, beyond even the formalism of physics — because there wasn’t physics yet for the things he wanted to understand.

I am still waiting for models that aspire to pack complex systems — whole economies — into high-dimensional space, “hold it all in their heads”, then make observations and predictions way out beyond the if-then of “reasoning” language.

Think harder!

“Thinking modes,” from Sloan’s wonderful newsletter.

Pair with “Horseless Carriages, Digital Paint, AI,” Quentin Hardy’s meditation on the ways in which new technologies shape both our language(s) and the ways we think (from Hardy’s also-wonderful newsletter).

[Image above: Rodin, “The Thinker” (source)]

* Albert Einstein

###

As we ponder pondering, we might recall that it was on this date in 1963 that “Louie Louie” by the Kingsmen entered the Billboard Hot 100.

For more on how the record came to be (and the ruckus over language that followed), see here (and here and here).

Written by (Roughly) Daily

November 9, 2025 at 1:00 am

“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…

Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…

Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.

Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.

To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).

Bloom’s taxonomy of critical thinking makes a great deal of sense. Below, you can see how what we’d call “the creative act” occupies the top two entries of the pyramid of critical thinking, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.

To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.

In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…

… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.

We can view this through the lens of one of the most cited papers in all psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor chess players think in terms of individual pieces and individual moves, but great chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.

I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.

But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…

While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.

This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.

The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading: creativity is hindered by reliance on AI, and over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies of simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain-drained the most…

… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.

But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.

With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).

Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.

So then what’s the take-away?

For one, I think we should be cautious about AI exposure in children. E.g., there is evidence from another paper in the brain-drain research subfield in which younger AI users showed the most dependency, and the younger cohort also didn’t match the critical thinking skills of older, more skeptical AI users. As a young user put it:

It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.

What a lovely new concern for parents we’ve invented!

Already, parents have to weather internal debates and worries about exposure to short-form video content platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how not to overexpose their child to social media or addictive video games, etc.

Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.

For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—whether or not that skepticism is warranted!—makes for healthier AI usage.

In other words, pro-human bias and AI distrust are cognitively beneficial.

It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as we’ve had to learn our way into effective use of new technologies before, both as individuals and as societies, so we will with AI.

The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.

Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?”

* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b

###

As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.

But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy; the central result was a new field, process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).

“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”

 source

“We shape our tools and thereafter our tools shape us”*…

A late 19th-century illustration of 18th-century people, gobsmacked by the many tech changes that have made their world irrelevant

AI is on the march, with implications, TBD, for… well, for everything. Nayef Al-Rodhan ponders its potential impact on philosophy…

Around the world, Artificial Intelligence (AI) is seeping into every aspect of our daily life, transforming our computational power, and with it the manufacturing speed, military capabilities, and the fabric of our societies. Generative AI applications such as OpenAI’s ChatGPT, the fastest growing consumer application in history, have created both positive anticipation and alarm about the future potential of AI technology. Predictions range from doomsday scenarios describing the extinction of the human species to optimistic takes on how it could revolutionise the way we work, live and communicate. If used correctly, AI could catapult scientific, economic and technological advances into a new phase in human history. In doing so it has the potential to solve some of humanity’s biggest problems by preventing serious food and water scarcity, mitigating inequality and poverty, diagnosing life-threatening diseases, tackling climate change, preventing pandemics, designing new game-changing proteins, and much more.

AI technology is rapidly moving in the direction of Artificial General Intelligence (AGI), the ability to achieve human-level machine intelligence, with Google’s AI Chief recently predicting that there is a 50% chance that we’ll reach AGI within five years. This raises important questions about our human nature, our sentience, and our dignity needs. Can AI ever become truly sentient? If so, how will we know if that happens? Should sentient machines share similar rights and responsibilities as humans? The boardroom drama at OpenAI in late November 2023 also deepened the debate about the dangers of techno-capitalism: is it possible for corporate giants in the AI space to balance safety with the pursuit of revenues and profit? 

As AI advances at breakneck speed, ethical considerations are becoming increasingly critical. Sentient AI implies that the technology has the capacity to evolve and be self-aware, in doing so feeling and experiencing the world just as a human would. According to the British mathematician Alan Turing, if a human cannot distinguish whether it is conversing with an AI or another human, then the AI in question has passed the test. However, given AI’s sophisticated conversational skills and ability to give the impression of consciousness, the Turing Test is becoming too narrow and does not grasp all the nuances of what makes us sentient and, more broadly, human. To stay on the front foot of technological progress, we need to supplement the Turing Test with transdisciplinary frameworks for evaluating increasingly human-like AI. These frameworks should be based on approaches rooted in psychology, neuroscience, philosophy, the social sciences, political science and other relevant disciplines.

We do not yet have a full understanding of what makes a thing sentient, but transdisciplinary efforts by neuroscientists, computer scientists and philosophers are helping develop a deeper understanding of consciousness and sentience. So far, we have found that emotions are one of the important characteristics needed for sentience, as is agency or intrinsic motivation. A sentient AI would need to have the ability to create autonomous goals and an ability to pursue these goals. In human beings, this quality has evolved from our intrinsic survival instinct, while in AI it is still, for now, lacking. According to recent studies, a sense of time, narrative, and memory is also critical for determining sentience. A level of sentience comparable to humans would require autobiographical memory and a concept of the linear progression of time. In current AI systems, these capabilities are limited – but recent developments raise uncomfortable philosophical questions about whether sentient AI should share similar rights and responsibilities in the event that it becomes a reality. And if so, how does one hold the technology accountable for its actions? And how will we define – legally and ethically – sentient AI’s role in society? We currently treat AI technology and machines as property, so how will this change if they are granted their own rights? There is no clear-cut answer, but as I argued in ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’, we should attribute agency to machines whenever they appear to possess the same qualities that characterise humans. I also believe that machines ought to be treated as agents if they prove themselves to be emotional, amoral, and egoist.

These debates, however they unfold, will clearly have deep implications for the future of philosophy itself. In ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’ I make the case that it is a short step from AI’s present capabilities to its potential future use developing novel philosophical hypotheses and thought experiments. It is therefore not unthinkable that future AI systems could break new ground in the field of normative ethics, helping pinpoint moral principles that human philosophers have failed to grasp. However, we should be mindful that their conception of morality or beauty, for example, may have nothing in common with ours, or it may supersede our own capacities and reflections. This could limit the ability of sophisticated artificial agents to answer long-standing philosophical questions, however superior they may be to the most advanced human intellectual output. We should consider how these developments are likely to impact how we understand the world around us, both in terms of the subject matter and of the theorising entity involved. Artificial agents will no doubt be put under the microscope and will be studied alongside the human mind and human nature: not just to compare and contrast, but also to understand how these artificial entities relate to – and treat – one another, and humanity itself. There is also the question of how human philosophers will react if and when AI-steered machines become superior philosophical theorisers. Will flesh-and-blood philosophers be forced to compete cognitively with entities whose intellectual abilities vastly supersede our own? Will AI systems overtake our limited human reasoning and reflective capacities? If this happens, what does this mean for our own human agency, the control we have over our lives and the future of our societies?…

… Powerful AI technologies will progressively increase our capabilities, for good or ill. We therefore need to be clear-sighted about the AI governance frameworks urgently needed to futureproof the safe use of AI. The recent high drama at OpenAI, whose founding mission is “to ensure that artificial general intelligence benefits all of humanity”, gave us a glimpse of the main rift in the AI industry, pitting those focused on commercial growth against those uneasy with the potential ramifications of the unbridled development of AI. However well-motivated AI governance schemes might be, they are less robust than one would hope. At the same time, self-regulation by global tech companies is becoming increasingly difficult given the large sums at stake and the economic and political influence of these companies.

With this in mind, we must keep an open mind not just about the immediate man-made dangers of AI technologies but also their potential to redefine what it means to be human. They will shape how we understand and engage with the world, in doing so making us reevaluate our place in it. Our chances of survival as a species and the likelihood of our existence in a free, independent, peaceful, prosperous, creative and dignified world will depend on the future trajectory of AI. Our historical yearning for longing and belonging hangs in the balance. To protect citizens from potential harm and limit the risks, AI should be regulated just like any other technology. We must also apply transdisciplinary approaches to make sure that the use and governance of AI is always steered by human dignity needs for all, at all times and under all circumstances. AI’s trajectory is not predetermined, but the clock is ticking and humanity may have less time than it thinks to control its collective destiny… 

Eminently worth reading in full. Whether or not one agrees with the author’s specific conclusions, his larger point– that we need to be mindful and purposive about the deployment of AI– is surely well-taken: “Sentience, Safe AI and The Future of Philosophy: A Transdisciplinary Analysis,” from @SustainHistory in @oxpubphil.

See also: “Thinking About AI, Before AI Disappears” from Quentin Hardy’s new newsletter, Technohumanism (source of image above).

* Father John Culkin, SJ, a Professor of Communication at Fordham University (and friend of Marshall McLuhan, to whom the quote is often incorrectly attributed)

###

As we think about thinking, we might recall that it was on this date in 1979 that Apple began work on the Lisa, which would become the world’s first commercial computer with a graphical user interface.

Originally intended to sell for $2,000 and ship in 1981, the Lisa was delayed until 1983 and sold for $10,000. Though its technology was ahead of its time, its high cost, relative lack of software, and some hardware reliability issues ultimately sank the Lisa. Still, much of the technology introduced by the Lisa (itself rooted in the earlier work of Doug Engelbart [and here] and Xerox PARC) influenced the development of the Macintosh as well as other future computer and operating system designs: e.g., a bitmapped display, a window-based graphical user interface, icons, folders, a two-button mouse, (Ethernet) networking, file servers, print servers, and email.

The Lisa, with its development team (source)

“Criticism may not be agreeable, but it is necessary. It fulfills the same function as pain in the human body. It calls attention to an unhealthy state of things.”*…

The estimable Henry Farrell on why, on average, we’re better at criticizing others than thinking originally ourselves…

… our individual reasoning processes are biased in ways that are really hard for us (individually) to correct. We have a strong tendency to believe our own bullshit. The upside is that if we are far better at detecting bullshit in others than in ourselves, and if we have some minimal good faith commitment to making good criticisms, and entertaining good criticisms when we get them, we can harness our individual cognitive biases through appropriate group processes to produce socially beneficial ends. Our ability to see the motes in others’ eyes while ignoring the beams in our own can be put to good work, when we criticize others and force them to improve their arguments. There are strong benefits to collective institutions that underpin a cognitive division of labor.

This superficially looks to resemble the ‘overcoming bias’/‘less wrong’ approaches to self-improvement that are popular on the Internet. But it ends up going in a very different direction: collective processes of improvement rather than individual efforts to remedy the irremediable. The ideal of the individual seeking to eliminate all sources of bias so that he (it is, usually, a he) can calmly consider everything from a neutral and dispassionate perspective is replaced by a Humean recognition that reason cannot readily be separated from the desires of the reasoner. We need negative criticisms from others, since they lead us to understand weaknesses in our arguments that we are incapable of coming at ourselves, unless they are pointed out to us…

… It’s not about a radical individual virtuosity, but a radical individual humility. Your most truthful contributions to collective reasoning are unlikely to be your own individual arguments, but your useful criticisms of others’ rationales. Even more pungently, you are on average best able to contribute to collective understanding through your criticisms of those whose perspectives are most different to your own, and hence very likely those you most strongly disagree with. The very best thing that you may do in your life is create a speck of intense irritation for someone whose views you vigorously dispute, around which a pearl of new intelligence may then accrete…

… One of my favourite passages from anywhere is the closing of Middlemarch, where Eliot says of Dorothea:

“Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.”

Striving to be a Dorothea is a noble vocation, and likely the best we can hope for in any event; sooner or later, we will all be forgotten. In the long course of time, all of our arguments and ideas will be broken down and decomposed. At best we may hope, if we are very lucky, that they will contribute in some minute way to a rich humus, from which plants that we will never see or understand might spring.

Eminently worth reading in full: “In praise of negativity,” from @henryfarrell.

* Winston Churchill

###

As we contemplate the constructive, we might recall that it was on this date in 1871 that a discipline wholly dependent on incorporating corrective critique into its methods was founded: Cleveland Abbe became the founding chief scientist– effectively the head– of the newly formed U.S. Weather Service (later named the Weather Bureau; later still, the National Weather Service).

Abbe had started the first private weather reporting and warning service (in Cincinnati), had been issuing weather reports or bulletins since 1869, and was the only person in the country at the time experienced in drawing weather maps from telegraphic reports and forecasting from them. The first U.S. meteorologist, he is known as the “father of the U.S. Weather Bureau,” where he systematized observation, trained personnel, and established scientific methods. He went on to become one of the 33 founders of the National Geographic Society.

source

“I will buckle down to work as soon as I finish reading the Internet”*…

From Aldobrandino da Siena’s Le Régime du corps (1265-70 CE)

Worried that technology is “breaking your brain”? As Joe Stadolnik explains, fears about attention spans and focus are as old as writing itself…

If you suspect that 21st-century technology has broken your brain, it will be reassuring to know that attention spans have never been what they used to be. Even the ancient Roman philosopher Seneca the Younger was worried about new technologies degrading his ability to focus. Sometime during the 1st century CE, he complained that ‘The multitude of books is a distraction’. This concern reappeared again and again over the millennia that followed. By the 12th century, the Chinese philosopher Zhu Xi saw himself living in a new age of distraction thanks to the technology of print: ‘The reason people today read sloppily is that there are a great many printed texts.’ And in 14th-century Italy, the scholar and poet Petrarch made even stronger claims about the effects of accumulating books:

Believe me, this is not nourishing the mind with literature, but killing and burying it with the weight of things or, perhaps, tormenting it until, frenzied by so many matters, this mind can no longer taste anything, but stares longingly at everything, like Tantalus thirsting in the midst of water.

Technological advances would make things only worse. A torrent of printed texts inspired the Renaissance scholar Erasmus to complain of feeling mobbed by ‘swarms of new books’, while the French theologian Jean Calvin wrote of readers wandering into a ‘confused forest’ of print. That easy and constant redirection from one book to another was feared to be fundamentally changing how the mind worked. Apparently, the modern mind – whether metaphorically undernourished, harassed or disoriented – has been in no position to do any serious thinking for a long time.

In the 21st century, digital technologies are inflaming the same old anxieties… and inspiring some new metaphors…

Same as it ever was– a history of the anxieties about attention and memory that new communications technologies have always occasioned: “We’ve always been distracted,” from @joestadolnik in @aeonmag.

* Stewart Brand @stewartbrand

###

As we learn our way into new media, we might recall that it was on this date in 1946 that the first Washington, D.C.–New York City telecast was accomplished, using AT&T’s coaxial cable; General Dwight Eisenhower was seen to place a wreath at the base of the statue in the Lincoln Memorial, and others made brief speeches. The event was judged a success by engineers, although Time magazine called it “as blurred as an early Chaplin movie.”

1946 television (source)

Written by (Roughly) Daily

February 18, 2023 at 1:00 am