Posts Tagged ‘knowledge’
“The Times They Are A-Changin'”*…

Further to an earlier post in his wonderful newsletter The Honest Broker, Ted Gioia offers a provocative (and for your correspondent’s money, ultimately optimistic) forecast…
Would you believe me if I told you that the biggest news story of our century is happening right now—but is never mentioned in the press?
That sounds crazy, doesn’t it?
But that is often the case when a bold new worldview appears.
The biggest changes often happen long before they even get a name. By the time the scribes notice, the world is already reborn.
- How long did it take before the Renaissance got mentioned in the town square?
- When did newspapers start covering the Enlightenment?
- Or the collapse in mercantilism?
- Or the rise of globalism?
- Or the birth of Christianity or Islam or some other earthshaking creed?
You can take this to the bank: If the New York Times notices the Buddha, the enlightened one has already left town.
For example, the word Renaissance got introduced two hundred years after the start of the Renaissance. The game was already over.
The same is true of most major cultural movements—they are truly the elephants in the room. And the elites at the epicenter of power are absolutely the last to notice.
Tiberius may run the entire Roman Empire, but he will never hear the Good News.
There’s a general rule here—the bigger the shift, the easier it is to miss.
We are living through a situation like that right now. We are experiencing a total shift—like the magnetic poles reversing. But it doesn’t even have a name—not yet.
So let’s give it one.
Let’s call it: The Collapse of the Knowledge System.
We could also define it as the emergence of a new knowledge system.
In this regard, it resembles other massive shifts in Western history—specifically the rebirth of humanistic thinking in the early Renaissance, or the rise of Romanticism in the nineteenth century.
In these volatile situations, the whole entrenched hierarchy of truth and authority gets totally reversed. The old experts and their systems are discredited, and completely new values take their place. The newcomers bring more than just a new attitude—they turn everything on its head.
That’s happening right now…
[Gioia unpacks ten signs of this collapse…]
… Why isn’t this discussed openly—in media, in universities, in public discourse? Everything I’ve mentioned above is public knowledge. So why aren’t the experts discussing it?
Well, that’s obvious.
The experts don’t want to admit this is happening because it puts their status at risk. And the same is true of all the organizations and businesses that own and control the knowledge system.
The last thing they want is to call attention to the breakdown.
So they can only address the situation in isolated, disconnected ways. That these ten symptoms are part of a larger, systemic problem can’t be acknowledged—not under any circumstances.
And that’s why we can’t assume that any quick fix—from politicians or universities—will reverse this decline. We are beyond that stage.
The more important question is this: When the old knowledge hierarchy collapses, what will replace it?
Yes, something will replace it. And I’ve hinted at that in previous articles here—for example, my “Notes Toward a New Romanticism.”
Even as tech degrades, people will still need something solid and reliable that will contribute to human flourishing. In fact, they will need that more than ever.
If they can’t get it from Silicon Valley, they will find it elsewhere.
But where?
Let me point out that despite all the manipulations, hallucinations, abuses, and dysfunctional excesses of the digital life…
…Despite all of these, symphonies sound as majestic as ever. Philosophy is more necessary than ever. Paintings are still glorious. Great architecture does not collapse. Nature warms the heart. As do poems and epics and myths.
Jazz still swings. Heroes still prevail. The soul is stirred. And one lover still reaches for another.
I’m not sure what exactly will replace the cold, dying knowledge system. But I suspect it will recognize the value of these things. And will prevail for that very reason…
… before closing, let me make a few more points:
- Science and tech will not disappear. But they will face an intense backlash beyond anything we’ve experienced in the last 200 years.
- The people running the tech world fail to grasp this. They think that the next big stage is the Singularity—when everybody lets the technocracy control everything and make every decision. In fact, the exact opposite is about to unfold.
- I’m not suggesting that you can replace tech with a poem or symphony. But tech now desperately needs what can only be provided by the humanities and human values.
- The new knowledge system will be built on these human values. Technology will be forced to serve it—or it will get locked into a losing battle with the new “softer and gentler” knowledge system…
A huge change is coming: “The Ten Warning Signs,” from @tedgioia.bsky.social.
* Bob Dylan
###
As we wonder about ways of knowing, we might recall that it was on this date in 1983 that the Kinks released their 20th studio album, State of Confusion. (The LP features the single “Come Dancing”, which hit number 6 on the Billboard Hot 100 and was one of the band’s biggest hit singles in the United States, equaling the 1965 peak of “Tired of Waiting for You.” The album itself was a major success, peaking at number 12 on the Billboard albums chart.)
“Mathematics is the music of reason”*…
New technologies, most centrally AI, are arming scientists with tools that might not just accelerate or enhance their work, but altogether transform it. As Jordana Cepelewicz reports, mathematicians have started to prepare for a profound shift in what it means to do math…
Since the start of the 20th century, the heart of mathematics has been the proof — a rigorous, logical argument for whether a given statement is true or false. Mathematicians’ careers are measured by what kinds of theorems they can prove, and how many. They spend the bulk of their time coming up with fresh insights to make a proof work, then translating those intuitions into step-by-step deductions, fitting different lines of reasoning together like puzzle pieces.
The best proofs are works of art. They’re not just rigorous; they’re elegant, creative and beautiful. This makes them feel like a distinctly human activity — our way of making sense of the world, of sharpening our minds, of testing the limits of thought itself.
But proofs are also inherently rational. And so it was only natural that when researchers started developing artificial intelligence in the mid-1950s, they hoped to automate theorem proving: to design computer programs capable of generating proofs of their own. They had some success. One of the earliest AI programs could output proofs of dozens of statements in mathematical logic. Other programs followed, coming up with ways to prove statements in geometry, calculus and other areas.
Still, these automated theorem provers were limited. The kinds of theorems that mathematicians really cared about required too much complexity and creativity. Mathematical research continued as it always had, unaffected and undeterred.
Now that’s starting to change. Over the past few years, mathematicians have used machine learning models to uncover new patterns, invent new conjectures, and find counterexamples to old ones. They’ve created powerful proof assistants both to verify whether a given proof is correct and to organize their mathematical knowledge.
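To make “proof assistant” concrete, here is a toy machine-checked proof in Lean – a sketch of ours, assuming Lean 4 with Mathlib, not code from the article:

```lean
-- A toy machine-checked proof (our sketch, assuming Lean 4 with Mathlib;
-- not from the article): the sum of two even numbers is even.
example (a b : ℕ) (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k := by
  obtain ⟨m, hm⟩ := ha    -- unpack a witness: a = 2 * m
  obtain ⟨n, hn⟩ := hb    -- unpack a witness: b = 2 * n
  exact ⟨m + n, by omega⟩ -- a + b = 2 * (m + n), closed by linear arithmetic
```

The point is not the (trivial) theorem but the workflow: the software verifies every step, which is what lets mathematicians use such assistants to check proofs and organize their knowledge.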
They have not, as yet, built systems that can generate proofs from start to finish, but that may be changing. In 2024, Google DeepMind announced that they had developed an AI system that scored a silver medal at the International Mathematical Olympiad, a prestigious proof-based exam for high school students. OpenAI’s more generalized “large language model,” ChatGPT, has made significant headway on reproducing proofs and solving challenging problems, as have smaller-scale bespoke systems. “It’s stunning how much they’re improving,” said Andrew Granville, a mathematician at the University of Montreal who until recently doubted claims that this technology might soon have a real impact on theorem proving. “They absolutely blow apart where I thought the limitations were. The cat’s out of the bag.”
Researchers predict they’ll be able to start outsourcing more tedious sections of proofs to AI within the next few years. They’re mixed on whether AI will ever be able to prove their most important conjectures entirely: Some are willing to entertain the notion, while others think there are insurmountable technological barriers. But it’s no longer entirely out of the question that the more creative aspects of the mathematical enterprise might one day be automated.
Even so, most mathematicians at the moment “have their heads buried firmly in the sand,” Granville said. They’re ignoring the latest developments, preferring to spend their time and energy on their usual jobs.
Continuing to do so, some researchers warn, would be a mistake. Even the ability to outsource boring or rote parts of proofs to AI “would drastically alter what we do and how we think about math over time,” said Akshay Venkatesh, a preeminent mathematician and Fields medalist at the Institute for Advanced Study in Princeton, New Jersey.
He and a relatively small group of other mathematicians are now starting to examine what an AI-powered mathematical future might look like, and how it will change what they value. In such a future, instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist. Mathematics might draw closer to laboratory sciences, or even to the arts and humanities.
Imagining how AI will transform mathematics isn’t just an exercise in preparation. It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for…
Absolutely fascinating: “Mathematical Beauty, Truth, and Proof in the Age of AI,” from @jordanacep.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.
###
As we wonder about ways of knowing, we might spare a thought for a man whose work helped trigger an earlier iteration of this enhance/transform discussion and laid the groundwork for the one unpacked in the article linked above: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general purpose computer, the ENIAC (see here and here) for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“The number 2 is a very dangerous number: that is why the dialectic is a dangerous process”*…
In order to bridge the yawning gulf between the humanities and the sciences, Gordon Gillespie suggests, we must turn to an unexpected field: mathematics…
In 1959, the English writer and physicist C P Snow delivered the esteemed Rede Lecture at the University of Cambridge [a talk now known as “The Two Cultures,” see here]. Regaled with champagne and Marmite sandwiches, the audience had no idea that they were about to be read the riot act. Snow diagnosed a rift of mutual ignorance in the intellectual world of the West. On the one hand were the ‘literary intellectuals’ (of the humanities) and on the other the (natural) ‘scientists’: the much-discussed ‘two cultures’. Snow substantiated his diagnosis with anecdotes of respected literary intellectuals who complained about the illiteracy of the scientists but who themselves had never heard of such a fundamental statement as the second law of thermodynamics. And he told of brilliant scientific minds who might know a lot about the second law but were barely up to the task of reading Charles Dickens, let alone an ‘esoteric, tangled and dubiously rewarding writer … like Rainer Maria Rilke.’
Sixty-plus years after Snow’s diatribe, the rift has hardly narrowed. Off the record, most natural scientists still consider the humanities to be a pseudo-science that lacks elementary epistemic standards. In a 2016 talk, the renowned theoretical physicist Carlo Rovelli lamented ‘the current anti-philosophical ideology’. And he quoted eminent colleagues such as the Nobel laureate Steven Weinberg, Stephen Hawking and Neil deGrasse Tyson, who agreed that ‘philosophy is dead’ and that only the natural sciences could explain how the world works, not ‘what you can deduce from your armchair’. Meanwhile, many humanities scholars see scientists as pedantic surveyors of nature, who may produce practical and useful results, but are blind to the truly deep insights about the workings of the (cultural) world. In his best-selling book The Fate of Rome (2017), Kyle Harper convincingly showed that a changing climate and diseases were major factors contributing to the final fall of the Roman Empire. The majority of Harper’s fellow historians had simply neglected such factors up to then; they had instead focused solely on the cultural, political and socioeconomic ones…
The divide between the two cultures is not just an academic affair. It is, more importantly, about two opposing views on the fundamental connection between mind and nature. According to one view, nature is governed by an all-encompassing system of laws. This image underlies the explanatory paradigm of causal determination by elementary forces. As physics became the leading science in the 19th century, the causal paradigm was more and more seen as the universal form of explanation. Nothing real fell outside its purview. According to this view, every phenomenon can be explained by a more or less complex causal chain (or web), the links of which can, in turn, be traced back, in principle, to basic natural forces. Anything – including any aspect of the human mind – that eludes this explanatory paradigm is simply not part of the real world, just like the ‘omens’ of superstition or the ‘astral projections’ of astrology.
On the opposing view, the human mind – be it that of individuals or collectives – can very well be regarded separately from its physical foundations. Of course, it is conceded that the mind cannot work without the brain, so it is not entirely independent of natural forces and their dynamics. But events of cultural significance can be explained as effects of very different kinds of causes, namely psychological and social, that operate in a sphere quite separate from that of the natural forces.
These divergent understandings underpin the worldviews of each culture. Naive realists – primarily natural scientists – like to point out that nature existed long before humankind. Nature is ordered according to laws that operate regardless of whether or not humans are around to observe. So the natural order of the world must be predetermined independently of the human mind. Conversely, naive idealists – including social constructivists, mostly encountered in the humanities – insist that all order is conceptual order, which is based solely on individual or collective thought. As such, order is not only not independent of the human mind, it’s also ambiguous, just as the human mind is ambiguous in its diverse cultural manifestations.
The clash of cultures between the humanities and the natural sciences is reignited over and over because of two images that portray the interrelationship of mind and nature very differently. To achieve peace between the two cultures, we need to overcome both views. We must recognise that the natural and the mental order of things go hand in hand. Neither can be fully understood without the other. And neither can be traced back to the other…
… The best mediator of a conciliatory view that avoids the mistake of the naive realist and the naive idealist is mathematics. Mathematics gives us shining proof that understanding some aspect of the world does not always come down to uncovering some intricate causal web, not even in principle. Determination is not explanation. And mathematics, rightly understood, demonstrates this in a manner that lets us clearly see the mutual dependency of mind and nature.
For mathematical explanations are structural, not causal. Mathematics lets us understand aspects of the world that are just as real as the Northern Lights or people’s behaviour, but are not effects of any causes. The distinction between causal and structural forms of explanation will become clearer in due course. For a start, take this example. Think of a dying father who wants to pass on his one possession, a herd of 17 goats, evenly to his three sons. He can’t do so. This is not the case because some hidden physical or psychological forces hinder any such action. The reason is simply that 17 is a prime number, so not divisible by three…
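The goat example is exactly the kind of fact a proof assistant can certify with no appeal to causes at all. A minimal sketch in Lean (ours, assuming Lean 4 with Mathlib; the essay contains no code):

```lean
-- 17 goats cannot be divided evenly among three sons:
-- 17 leaves remainder 2 on division by 3.
-- (Our toy illustration, assuming Lean 4 with Mathlib; not from the essay.)
example : 17 % 3 = 2 := rfl        -- checked by direct computation
example : ¬ (3 ∣ 17) := by decide  -- divisibility is decidable, so `decide` settles it
```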
… In his ‘two cultures’ speech, Snow located mathematics clearly in the camp of the sciences. But… mathematics doesn’t adhere to the explanatory paradigm of causal determination. This distinguishes it from the natural sciences. Nevertheless, mathematics tells us a lot about nature. According to Kant, it does so because it tells us a lot about the human mind. Mind and nature are inseparable facets of the world we inhabit and conceive. So, why should the humanities not also count as a science? They can tell us just as much about that one world on a fundamental level as the natural sciences. Mathematics demonstrates this clearly…
… Mathematics undermines the causal explanatory paradigm not only in its natural scientific manifestations, but also in its uses in the humanities. Way too often and way too fast, we explain a wide variety of phenomena by hidden causes, when the simple admission of having no explanation would be not only more honest, but also wiser. Wittgenstein spoke of the disease of wanting to explain. This disease shows itself not just in our private everyday exchanges and in the usual public debates, but also in the scholarly discourse of the humanities. When confronted with individual or collective human thinking and behaviour, it is tempting to assume just a few underlying factors responsible for the thinking and behaviour. But, more often than not, there really is no such neat, analysable set of factors. Instead, there is a vast number of natural, psychological and societal factors that are all equally relevant for the emergence of the phenomenon one wants to explain. Perhaps a high-end computer could incorporate all these factors in a grand simulation. But a simulation is not an explanation. A simulation allows us to predict, but it doesn’t let us understand.
The aim of the humanities should not be to identify causes for every phenomenon they investigate. The rise and fall of empires, the economic and social ramifications of significant technological innovations, the cultural impact of great works of art are often products of irreducibly complex, chaotic processes. In such cases, trying to mimic the natural sciences by stipulating some major determining factors is a futile and misleading endeavour.
But mathematics shows that beyond the causal chaos there can be order of a different kind. The central limit theorem lets us see and explain a common regularity in a wide range of causally very different, but equally complex, natural processes. With this and many other examples of structural mathematical explanations of phenomena in the realm of the natural sciences in mind, it seems plausible that mathematical, or mathematically inspired, abstraction can also have fruitful applications in the humanities.
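To see that “common regularity” for yourself, here is a minimal Python sketch (our illustration, not from the essay): means of samples drawn from three causally unrelated, very differently shaped distributions all cluster in the same bell-shaped way.

```python
# A minimal central-limit-theorem demo (our illustration, not from the essay):
# sample means from very differently shaped distributions all cluster
# symmetrically around the parent mean, as the theorem predicts.
import random
import statistics

def sample_means(draw, n=30, trials=10_000):
    """Return `trials` means, each of `n` draws from the distribution `draw`."""
    return [statistics.fmean(draw() for _ in range(n)) for _ in range(trials)]

random.seed(0)
distributions = {
    "uniform":     lambda: random.uniform(0, 1),         # flat
    "exponential": lambda: random.expovariate(1.0),      # heavily skewed
    "bernoulli":   lambda: float(random.random() < 0.5), # two-valued
}

for name, draw in distributions.items():
    means = sample_means(draw)
    # A histogram of `means` would look like a bell curve in every case,
    # despite the parent distributions having nothing causal in common.
    print(f"{name:>11}: mean of means = {statistics.fmean(means):.3f}, "
          f"stdev of means = {statistics.stdev(means):.3f}")
```

No causal story connects the three processes; the regularity they share is structural, which is exactly the essay’s point.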
This is by no means meant to promote an uncritical imitation of mathematics in the humanities and social sciences. (The overabundance of simplistic econometric models, for instance, is a huge warning sign.) Rather, it is meant to motivate scholars in these fields to reflect more upon where and when causal explanations make sense. Complexity can’t always be reduced to a graspable causal explanation, or narrative. To the contrary, often the most enlightening enquiries are not those that propose new factors as the true explainers, but those that show by meticulous analysis that far more factors are crucially in play than previously thought. This, in turn, should motivate scholars to seek aspects of their subject of interest beyond causality that are both relevant and amenable to structural forms of explanation. Besides probability theory, chaos theoretical methods and game theory come to mind as mathematical sub-disciplines with potentially fruitful applications in this regard.
However, the main point of our discussion is not that mathematical applications in the humanities might bridge the gap between the natural sciences and the humanities. The point is that mathematics, not really belonging to either camp, shows them to be on an equal footing from the start. The natural scientific paradigm of explanation is not the role model any respectable form of enquiry has to follow. Mathematics shows that natural causes can’t explain every phenomenon, not even every natural phenomenon and not even in principle. So, there is no need for the humanities, the ‘sciences of the mind’, to always strive for explanations by causes that can be ‘reduced’ to more elementary, natural forces. Moreover, mathematics shows that causality, of any kind, is not the only possible basis on which any form of explanation ultimately has to stand. Take for example the semantic relationships between many of our utterances. It is not at all clear that these can be explained in terms of psychological causes, or any other causes. It is not unreasonable to believe that the world is irreducibly structured, in part, by semantic relations, just as it is structured by probabilistic relations…
… The divide between the natural sciences and the humanities does not stem from the supposed fact that only those mental phenomena are real that are explainable in natural-scientific terms. Nor is the divide due to some extra-natural mental order, determined by causal relationships of a very different kind than those studied in the natural sciences. The mental world and the physical world are one and the same world, and the respective sciences deal with different aspects of this one world. Properly understood, insofar as they deal with the same phenomena, they do not provide competing but complementary descriptions of these phenomena.
Mathematics provides the most impressive proof that a true understanding of the world goes beyond the discovery of causal relationships – whether they are constituted by natural or cultural forces. It is worth taking a closer look at this proof. For it outlines the bond that connects mind and nature in particularly bright colours. Kant understood this bond as a ‘transcendental’ one. The late Wittgenstein, on the other hand, demonstrated its anchoring in language – not in the sense of a purely verbal and written practice, but in the sense of a comprehensive practice of actions the mental and bodily elements of which cannot be neatly separated. In the words of Wittgenstein, ‘commanding, questioning, recounting, chatting are as much a part of our natural history as walking, eating, drinking, and playing.’
Mathematics too is part of this practice. As such, like every science, it is inseparably rooted in both nature and the human mind. Unlike the other sciences, this dual rootedness is obvious in the case of mathematics. One only has to see where it resides: beyond causality.
Uniting the “Two Cultures”? “Beyond Causality” in @aeon.co.
* C. P. Snow, The Two Cultures and the Scientific Revolution
###
As we come together, we might send carefully calculated birthday greetings to a man with a foot in each culture: Frank Plumpton Ramsey; he was born on this date in 1903. A philosopher, mathematician, and economist, he made major contributions to all three fields before his death (at the age of 26) in 1930.
While he is probably best remembered as a mathematician and logician and as Wittgenstein’s friend and translator, he wrote three papers in economics: on subjective probability and utility (a response to Keynes, 1926), on optimal taxation (1927, described by Joseph E. Stiglitz as “a landmark in the economics of public finance”), and on optimal economic growth (1928, hailed by Keynes as “one of the most remarkable contributions to mathematical economics ever made”). The economist Paul Samuelson described them in 1970 as “three great legacies – legacies that were for the most part mere by-products of his major interest in the foundations of mathematics and knowledge.”
For more on Ramsey and his thought, see “One of the Great Intellects of His Time,” “The Man Who Thought Too Fast,” and Ramsey’s entry in the Stanford Encyclopedia of Philosophy.
“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…
Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…
Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.
Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.
To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).
Bloom’s taxonomy of critical thinking makes a great deal of sense. What we’d call “the creative act” occupies the top two entries of the pyramid of critical thinking – synthesis and evaluation – wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.
To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.
In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…
… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.
We can view this through the lens of one of the most cited papers in all of psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor chess players think in terms of individual pieces and individual moves, but great chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.
But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…
While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.
This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.
The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading: that creativity gets hindered by reliance on AI, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies of simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…
… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.
But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.
With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).
Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.
So then what’s the take-away?
For one, I think we should be cautious about AI exposure in children. E.g., there is evidence from another paper in the brain-drain research subfield that younger AI users showed the most dependency, and that the younger cohort didn’t match the critical thinking skills of older, more skeptical AI users. As a young user put it:
It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.
What a lovely new concern for parents we’ve invented!
Parents already have to weather internal debates and worries about exposure to short-form video platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how to not overexpose their child to social media or addictive video games, etc.
Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.
For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—whether or not that skepticism is warranted!—makes for healthier AI usage.
In other words, pro-human bias and AI distrust are cognitively beneficial.
It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.
The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.
Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?“
* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b
###
As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.
But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy, the central result of which was a new field called process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).
“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”
“We need to take information, wherever it is stored, make our copies and share them with the world”*…

Long-time readers of (R)D will know of your correspondent’s affection for– and commitment to– archives and archiving: see, e.g., here, here, here, here, or here. As the new administration is systematically scrubbing government websites of public data and threatening the National Archives, it’s a painfully-timely concern.
Digital pioneer Mark Pesce weighs in with a reminder that our archiving efforts should be broad– and that we shouldn’t neglect the personal…
When moving house a few months back I found several heavy plastic tubs that, upon inspection, I saw contained my life’s work in print. They were full of articles, magazines, books and book chapters.
That informal archive represents only a small portion of my total output. I’ve been writing on and for the web pretty much since it came into existence outside of CERN, so I have more than 30 years’ worth of material online.
Those plastic tubs are therefore the tip of a proverbial iceberg, representing perhaps a tenth of my output; the rest is submerged on networks.
I had wanted to write about how to make our invisible digital lives more visible; then two horrible events – one personal, the other of global significance – reset my compass.
Earlier this month I lost my good friend Tony Kastanos to lung cancer. I’d always known him as an artist – musician, painter, provocateur – but it wasn’t until he was gone that I learned from his collaborators that he’d also released three albums of electronic music, produced with Tim Gruchy, who showed me how to find them on iTunes and Spotify.
I’d known Tony for two decades, but he’d never told me about his electronica work. Nor had he told me about his award-winning stop-motion video animation, Amerika Amerika.
Tim wondered aloud how to ensure that their collaborations would continue to be available. It’s an essential question confronting any creative talent working in the digital era: How do we continue to offer our contributions to the generations that follow, when we’re no longer around to spruik them?
The Internet Archive has a pivotal role to play here – not just because of its immunity to the commercial mutability of a Spotify or an Apple Music, but because its very existence and name imply a promise to maintain a long-term archive of all online creative works. Tim – and all of Tony’s other collaborators – could be putting copies of all their works into a Tony-Kastanos-archive-within-The-Archive. If that happens, my friend won’t disappear completely.
Half an hour after I’d learned of Tony’s passing, a friend in Los Angeles sent me a long, harrowing text message expressing fear the fires battering the city could claim their home.
A week later, they were relieved to find their home intact – but many others were not so fortunate.
Within a few days, a story began to circulate about one of the structures that did not survive: The building housing the archive of the Theosophical Society.
A century ago, Theosophists stood at the forefront of what today we’d call the “New Age” movement. Although the society’s star has dimmed in the decades since, their influence on religion, philosophy and culture remains profound. Their archive housed most of the papers and correspondence of the founders and main movers of the Theosophical Society – the record of its genesis and history.
As Errol Morris has said, “People can burn archives; people can destroy evidence, but to say that history is perishable, that historical evidence is perishable, is different than saying that history is subjective.” The best defense is wide distribution (per the full Aaron Swartz quote, below).
Where are the comprehensive archives to protect digital works, or allow us to memorialize friends? “Memories fade. Archives burn. All signal eventually becomes noise,” from @mpesce.arvr.social.ap.brid.gy in @theregister.com.
See also: “Century-Scale Storage” from Maxwell Neely-Cohen and the Library Innovation Lab at Harvard Law School
Oh, and now is a good time to visit– and support– the Internet Archive.
* “We need to take information, wherever it is stored, make our copies and share them with the world. We need to take stuff that’s out of copyright and add it to the archive. We need to buy secret databases and put them on the Web. We need to download scientific journals and upload them to file sharing networks… With enough of us, around the world, we’ll not just send a strong message opposing the privatization of knowledge – we’ll make it a thing of the past.” — Aaron Swartz
###
As we prioritize protection, we might recall that it was on this date in 1497 that Dominican friar and populist agitator Girolamo Savonarola, having convinced the citizenry of Florence to expel the Medici and recruited the city-state’s youth in a puritanical campaign, presided over “The Bonfire of the Vanities,” the public burning of art works, books, cosmetics, and other items deemed to be vessels of personal aggrandizement. Many art historians, relying on Vasari‘s account, believe that Botticelli, a partisan of Savonarola, consigned several of his paintings to the flames (and then “fell into very great distress”). Others are not so certain. In any case, it seems sure that the fire consumed works by Fra Bartolomeo, Lorenzo di Credi, and many other painters, along with books by Boccaccio, manuscripts of secular songs, a number of statues, and other antiquities.







