Posts Tagged ‘learning’
“The web of our life is of a mingled yarn”*…
In what does our personhood consist? From what/where does it come? João de Pina Cabral unpacks the seminal thinking of Lucien Lévy-Bruhl and the advances in cognitive science and developmental psychology that suggest that a person is not self-contained, but the outcome of a lifelong process of living with others…
It matters to understand what constitutes a person. After all, if there is one feature that distinguishes human society from other forms of sociality, it is that, at around one year of age, most human beings attain personhood: they learn to speak a language, develop object permanence – the understanding that things do not disappear when out of sight – and relate to others in consciously moral ways. Should all persons be accorded the same rights and duties by virtue of this condition? These are weighty questions that have occupied social scientists and philosophers since antiquity – particularly at moments such as the present, when war and imperial oppression once again raise their ugly heads.
Nevertheless, this question cannot be approached as a purely moral matter, for in order to determine what rights and duties may be attributed to persons, it is necessary to establish what persons are. This longstanding perplexity can now be addressed in increasingly sophisticated ways, following a century of sustained anthropological enquiry.
In September 1926, two of the most eminent anthropologists of the day met in person for the first time in New York. Both were Jewish and born in Europe, but one – Franz Boas – had become an American citizen and was a leading figure at Columbia University in New York, while the other – Lucien Lévy-Bruhl – was a professor in Paris. Both were highly learned, humanistically inclined and politically liberal; they respected one another, yet they did not seem to agree about the matter of the person.
Lévy-Bruhl had begun his career as a philosopher of ethics. His doctoral thesis focused on the legal concept of responsibility. He was struck by the fact that responsibility first arose between persons not as a law, but as an emotion – a deep-seated feeling. He argued that co-responsibility implies a bond between persons grounded less in reason than in the conditions of their emergence as persons. As children, individuals do not emerge out of nothing, but through deep engagement with prior persons – their caregivers. Thus, moral responsibility could not have arisen from adherence to norms or rules; rather, norms and rules emerged from the sense of responsibility that humans acquire as they become persons.
This led him to question how we become thinking beings. Do all humans, after all, think in the same way? He began reading the increasingly sophisticated ethnographic accounts emerging from Australia, Africa, Asia and South America, and was deeply influenced by an extended trip to China. He was an empirical realist, but also a personalist – that is, he accorded primacy to the person as such, refusing to subsume the individual into the group. In this respect, he was not persuaded by the arguments of the great sociologist Émile Durkheim concerning the exceptional status of the ‘sacred’ or the special powers of ‘collective consciousness’. Lévy-Bruhl soon arrived at a striking conclusion: in their everyday practices and especially in their ritual actions, the so-called ‘primitive’ peoples studied by ethnographers did not appear to conform to the norms of logic that had been regarded as universally valid since the time of Aristotle.
As a friend of his put it, Lévy-Bruhl discovered that such peoples are characterised by ‘a mystical mentality – full of the “supernatural in nature” and prelogic, of a different kind than ours’. Indeed, the basic principles of Aristotelian logic that continue to guide scientific thinking – underpinning modern technological development – seemed to be ignored by premodern peoples. Aristotle’s law of the excluded middle (p or not-p) did not appear to apply to their ‘mystical’ modes of thought, both because they tended to think in terms of concrete objects rather than abstractions, and because they exhibited what Lévy-Bruhl termed ‘participation’…
[de Pina Cabral traces the development of Lévy-Bruhl’s thought, starting with Plato’s concept of methexis; elaborates on Lévy-Bruhl’s ideas; and traces the advances in cognitive science and developmental psychology that support them…]
… the very experience of personhood – that is, the sense that I am myself – is not ‘individual’, since its emergence presupposes a prior condition of being-with others. The self arises from a sharing of being with others, from having been part of those who are close to us. One does not emerge as an addition to society, but rather as a partial separation from the participations that initially constituted one’s being.
As I become a person, I learn to relate to myself as an other; I transcend my immediate position in the world. Without this, I would not be able to speak a language, since the use of pronouns presupposes reflexive thought. Thus, as Lévy-Bruhl had already insisted in his notebooks, participation precedes the person. Intersubjectivity is not the meeting of already constituted subjects, but the ground from which subjectivity emerges. Participation, therefore, may be understood as the constitutive tension between the singular and the plural in the formation of the person in the world. In 1935, the great phenomenologist Edmund Husserl expressed this insight clearly in a letter to Lévy-Bruhl where he thanked him for his ideas on participation:
Saying ‘I’ and ‘we’, [persons] find themselves as members of families, associations, [socialities], as living ‘together’, exerting an influence on and suffering from their world – the world that has sense and reality for them, through their intentional life, their experiencing, thinking, [and] valuing.
In acting and being acted upon together in human company during the first year of life, children become ‘we’ at the same time as they become ‘I’, which means that persons are always, ambivalently, both ‘I’ and ‘we’. Participation and transcendence will remain sources of theoretical perplexity for as long as the ‘we’ is approached as a categorical matter – a question of ‘identity’ – rather than as the presence and activity of living persons in dynamic interaction with the world and with one another.
By contrast, once we accept that personhood is the outcome of a process – the encounter between the embodied capacities of human beings and the historically constituted world that surrounds them – participation loses its mystery. As Lévy-Bruhl put it in one of his final notes: ‘The impossibility for the individual to separate within himself what would be properly him and what he participates in in order to exist …’ Participation, therefore, is the ground upon which everyday social interaction is constituted. The ‘mystical’ (or transcendental) potential within each of us – that which animates the symbolic life of groups – is part of the very process through which each of us becomes ourselves…
How does one become a person? “We” before “I”: “To be is to participate,” from @aeon.co.
A (if not the) next question: how does personhood emerge when the formative interactions are increasingly mediated/attenuated by technology?
* Shakespeare, All’s Well That Ends Well, Act 4, Scene 3
###
As we get together, we might send behaviorist birthday greetings to a man whose work focused on how one might train the “persons” who emerge: Kenneth Spence; he was born on this date in 1907. A psychologist, he worked to construct a comprehensive theory of behavior to encompass conditioning and other simple forms of learning and behavior modification.
Spence attempted to establish a precise, mathematical formulation to describe the acquisition of learned behavior, trying to measure simple learned behaviors (e.g., salivating in anticipation of eating). Much of his research focused on classically conditioned, easily measured, eye-blinking behavior in relation to anxiety and other factors.
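For a sense of what such a formulation looked like: in the Hull–Spence tradition, the strength of a learned response was modeled as a product of accumulated habit and motivational state. A standard textbook rendering of the general form (a reconstruction for illustration, not a quotation from Spence’s own papers):

```latex
% Hull–Spence behavior theory, textbook form:
%   E = excitatory potential (strength of the learned response)
%   H = habit strength, built up over reinforced trials
%   D = drive; K = incentive motivation (Spence's key addition)
E = H \times (D + K)
```

Spence’s additive (D + K) term, in contrast to Hull’s purely multiplicative account, implied that incentive alone could energize a well-practiced habit even when drive was low.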
One of the leading theorists of his time, Spence was the most cited psychologist in the 14 most influential psychology journals in the last six years of his life (1962 – 1967). A Review of General Psychology survey, published in 2002, ranked Spence as the 62nd most cited psychologist of the 20th century.
“Don’t eat your seed corn”*…
AI doesn’t really “think.” Rather, it remembers how we thought together. Are we about to stop giving it anything worth remembering? Bright Simons with a provocative analysis…
We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.
The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.
But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.
So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.
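To make “converged” concrete: homogenization of this kind is usually measured by embedding each story and comparing the average pairwise similarity within each pool. A minimal sketch of that method in Python, where `embed` is a hypothetical stand-in for any sentence-embedding model (an illustration of the technique, not the study’s actual code):

```python
from itertools import combinations
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a sentence-embedding model:
    returns a deterministic unit-length vector for the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def mean_pairwise_similarity(stories: list[str]) -> float:
    """Average cosine similarity over all pairs of stories;
    higher values mean a more homogeneous pool."""
    vecs = [embed(s) for s in stories]
    sims = [float(a @ b) for a, b in combinations(vecs, 2)]
    return sum(sims) / len(sims) if sims else 0.0

# The study's pattern, expressed as a comparison one could run on two pools:
#   mean_pairwise_similarity(ai_assisted) > mean_pairwise_similarity(solo)
```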
Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course…
[Simons sketches the history of humans’ intertwined development of both social/organizational and utile technologies, concluding…]
… What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.
Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.
The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”
That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?…
[Simons explores evidence that this may already be materially underway, then explores what that “atrophy” might mean …]
… If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.
This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges…
[Simons unpacks that heritage, and puts it into dialogues with recent thoughts from Dario Amodei, Leopold Aschenbrenner, and Sam Altman, concluding…]
… The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human redundancy fantasy. It challenges the self-fulfilling doom-spirals created by the premature reallocation of material resources to a vision of AI. I speak of the philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can even benefit substantially from AI advances.
By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming number of tools and services that advanced AI models still need to produce useful outputs for users are not themselves AI-like and most were built before the high-intensity computing era began with AI. In the unsexy but critical field of PDF parsing—one of the ways in which AI consumes large amounts of historical data to get smart—studies show that only a very small proportion of tools were created using techniques like deep learning that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade all or most of these tools—from PDF parsers to database schemas—to align with the pace of high-intensity computing driven by the power-thirst of AI. Yet, we are not at the point where AI can simply create its own dependencies.
Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging where cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.
The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible…
… The Social Edge prescription is that organizations that hire more people to work in AI-enriched, high-interaction, and transmediary roles—where AI scaffolds learning rather than substituting for it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, the value arc shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in trans-mediation and high human interactionism.
The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.
The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.
Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.
None of these individual acts is catastrophic. However, their compound effect may be.
The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.
Making the right strategic choices about AI is going to become a defining trait in leadership. Bloom et al.’s cross-country research has long established that management quality explains a substantial share of productivity variance between teams and organizations, and even countries.
In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff is one whose upside accelerates faster than the downside decelerates. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI would tend to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.
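For readers who want the borrowed term unpacked: a payoff is convex when the chord between any two outcomes lies on or above the curve, so the upside scales faster than the downside. A standard rendering of the definition plus the canonical options example (not Simons’s own notation):

```latex
% Convexity: for all outcomes x, y and any 0 \le \lambda \le 1,
%   f(\lambda x + (1-\lambda)\, y) \;\le\; \lambda f(x) + (1-\lambda) f(y)
% Canonical convex payoff: a call option struck at K,
%   f(S) = \max(S - K,\; 0)
% The downside is floored at zero while the upside is unbounded;
% by Jensen's inequality, more dispersion raises the expected payoff.
```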
The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from…
Eminently worth reading in full: “The Social Edge of Intelligence.”
Consider also the complementary perspectives in “What will be scarce?,” from Alex Imas (via Tim O’Reilly/ @timoreilly.bsky.social)… and in the second piece featured last Monday: “Curiosity Is No Solo Act.”
Apposite: “Some Unintended Consequences Of AI,” from Quentin Hardy.
And finally, from the estimable Nathan Gardels, a suggestion that OpenAI’s recent paper on industrial policy for the Age of AI fills a vacuum left by an unimaginative political class and should be taken seriously, at least as a conversation starter: “OpenAI Proposes A ‘Social Contract’ For The Intelligence Age.”
* Old agricultural proverb
###
As we take the long view, we might recall that today is the anniversary of a technological advance that both fed the social edge and encouraged the build-out of the technostructure from which today’s AI hatched: on this date in 1993, Version 1.0 of the web browser Mosaic was released by the National Center for Supercomputing Applications. It was the first software to provide a graphical user interface for the emerging World Wide Web, including the ability to display inline graphics.
The lead Mosaic developer was Marc Andreessen, one of the future founders of Netscape and now a principal at the venture capital firm Andreessen Horowitz (AKA “a16z”)… where he has become a major investor in, promoter of, and political champion of the current crop of AI firms.
“The mind is not a vessel to be filled, but a fire to be kindled”*…
(Roughly) Daily is, in effect, a kind of notebook, a commonplace book. So it will be no surprise that your correspondent found today’s featured piece fascinating.
Jillian Hess, a professor who studies the history of note-taking, shares the lessons she took from her review of the papers of the remarkable Richard Feynman…
Formal education, at its best, prepares us for a life of learning. After all, we are only in school for a fraction of our lives and there is so much to learn!
Richard Feynman (1918-1988) understood the value of self-education. He was a Nobel Prize-winning theoretical physicist, a member of the Manhattan Project at the age of 25, and a dynamic public intellectual who never stopped learning.
Often touted as one of history’s greatest learners, Feynman taught himself a dizzying amount of science. I wanted to see his notes for myself—to observe the great autodidact thinking on the page. So, I visited his archives at Caltech in February…
… In the archives, I saw… for myself: Feynman’s notebooks contain imprints of thinking in real-time—the work as it happened. They were instruments for thinking through uncertainty.
What follows is a list of note-taking principles for self-education that I gathered while studying Feynman’s notebooks.
Start with First Principles: Feynman’s “Things I Don’t Know About” Notebook
Discussions about Feynman’s learning process usually draw from this notebook, which he compiled as a Ph.D. student at Princeton. The contents include mechanics, mathematical methods, and thermodynamics. Clearly, he knew something about these topics, but he found his understanding superficial. So, his response was to take the subject apart—to break it down into “the essential kernels” …
[Hess illustrates this principle, then unpacks two others: “create a reading index” and “keep learning.” She continues…]
… Uncertainty is Interesting
This is my biggest takeaway: We should fear certainty more than doubt. Learning to live with uncertainty is an essential aspect of learning, as Feynman said in 1981:
You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong.
And then, in an echo of his “Notebook of Things I Know Nothing About,” compiled four decades prior, he adds:
…I’m not absolutely sure of anything, and there are many things I don’t know anything about.
If a man as celebrated for his genius as Feynman felt that way, certainly the rest of us have a lot more to learn…
[And she concludes…]
… Notes on Feynman’s Notes:
Use notes to think: Feynman didn’t think through problems in his head and then turn to his notebooks. Instead, he used his notebooks to think through problems. His thought process required paper.
Start with first principles: “Why” is a very powerful question. And asking why can lead us back to the fundamentals and help us understand them in an entirely new light. This applies to any subject. Feynman has helped me think of note-taking as a kind of expedition. Use your notes to dig deeper into topics you think you already understand.
Never stop learning: How wonderful would it be if we could hold onto the excitement of learning we had as children? After all, the world didn’t get less interesting. It’s worth returning to the note-taking methods you used in school to see if they are still useful in adulthood. I particularly like Feynman’s high school method of taking 30 minutes to understand a subject before he allowed himself to take notes on it.
[Then leaves us with the man himself, “in all his radiant, enthusiastic, brilliance”…]
On “Richard Feynman’s Notes For Self-Education.”
Pair with: “Curiosity Is No Solo Act”: “it gains its real power when embedded in webs of relationship and shared meaning-making”… something that Feynman’s life also demonstrated (as you can see in his autobiography and/or in James Gleick’s biography, Genius)
* Plutarch
###
As we light that fire, we might spare a thought for Jeremy Bernstein; he died on this date last year. A physicist who worked on nuclear propulsion for Project Orion and held research and teaching positions at Stevens Institute of Technology, the Institute for Advanced Study, Brookhaven National Laboratory, CERN, Oxford University, University of Islamabad, and École Polytechnique, he is better remembered as a gifted popular science writer and profiler of scientists.
Bernstein wrote 30 books, and scores of magazine articles for “general readers”– for The New Yorker, where he was a staff writer from 1961 to 1995, and for The Atlantic Monthly, the New York Review of Books, and Scientific American, among others.
Of Feynman, Bernstein wrote “[his] Mozartean genius in physics seemed to be combined with an almost equally Mozartean urge to play the clown.” (in which, of course, Feynman was in the good company of Einstein, Claude Shannon, and others :-)
“You live and learn. At any rate, you live.”*…
… and to the extent that we care about our democracy, that’s an issue.
In an article based on his recent Sakurada-Kai Foundation Oxbridge Lecture at Keio University, Tokyo, John Dunn argues that our democracies depend on our picking up the pace of learning. The abstract:
There cannot be a coherent democratic theory because democracy is not a determinate topic. Representative democracy is a relatively modern regime form. It now needs rehabilitation because so many instances have performed poorly for so long. Representative democracy is now also an aging regime. As a type of state, it is subject to the territorial contentiousness and contested legitimacy of any state. It claims its legitimacy from iterative popular choice, but the plausibility of that claim is increasingly strained by the drastic disparities in life chances reproduced through the property systems it protects. The inherent difficulty for citizens to judge how to advance their collective interests is aggravated by the recent transformation of the information economy. In the cumulative damage inflicted by climate change it faces a deadlier peril than any previous regime and one which only a citizenry that can enlighten itself in time can reasonably hope to nerve itself to meet…
There follows a fascinating– and provocative– elaboration of this thesis in which Dunn considers the history of democracy and the alternatives with which it has, since its inception, vied. He concludes in a bracing fashion…
… The varieties of autocracy which will be on offer wherever the rest of the world has the opportunity to take them up will be without exception the reverse of enlightened – instrumentally and compulsively bound to the extremes of obscurantism, Darkness as a full-on fideist commitment, deliberate self-blinding as a navigational strategy. Move fast, break lots, and never pause to inspect the wreckage.
Representative democracy has recently proved itself a poor structure for collective enlightenment, but the case for it depends on its at least not precluding that, its being still open to making the attempt, and responding to what it can contrive to learn. The most optimistic vision of democracy in action has always seen it as an opportunity for collective self-education on the content of shared goods and the means to achieve them. If that is scarcely a realist picture of what it has ever been, at least it is an image of the right shape. It is too late to ask who will educate the educators. At this point we must educate ourselves together and heed the lessons of that education or we must and will die – not just each of us one by one, as we were always fated to do, but soon enough all of us and for ever…
Eminently worth reading in full: “Can Democracy be Rehabilitated?”
Apposite: “How American Democracy Fell So Far Behind,” from Steven Levitsky and Daniel Ziblatt (gift article– and source of the image above)
* Douglas Adams, Mostly Harmless
###
As we devote ourselves to democracy, we might spare a thought for Ludwig van Beethoven; he died on this date in 1827. A crucial figure in the transition between the Classical and Romantic eras in Western music, he remains one of the most famous and influential of all composers. His best-known compositions include 9 symphonies, 5 concertos for piano, 32 piano sonatas, and 16 string quartets. He also composed other chamber music, choral works (including the celebrated Missa Solemnis), a single opera (Fidelio), and numerous songs.
Relevantly to the piece above…
Beethoven admired the ideals of the French Revolution, so he dedicated his third symphony to Napoleon Bonaparte… until Napoleon declared himself emperor. Beethoven then sprung into a rage, ripped the front page from his manuscript and scrubbed out Napoleon’s name…
Beethoven’s temper and Symphony No. 3 ‘Eroica’

“Tell me to what you pay attention and I will tell you who you are”*…

Before the attention economy consumed our lives, “pursuit tests” devised by the US military coupled man to machine with the aim of assessing focus under pressure. D. Graham Burnett explores these devices for evaluating aviators, finding a pre-history of the laboratory research that has relentlessly worked to slice and dice the attentional powers of human beings…
We worry about our attention these days — nearly all of us. There is something. . . wrong. We cannot manage to do what we want to do with our eyes and minds — not for long, anyway. We keep coming back to the machines, to the screens, to the notifications, to the blinking cursor and the frictionless swipe that renews the feed.
An ethnographer from Mars, moving among us (would we even notice?), might have trouble understanding our complaint: “Trouble with their attention? They stare at small slabs of versicolor glass all day! Their attentive powers are. . . sublime!”
And that misunderstanding rather sharpens the point: we don’t have any problem at all with the forms of attention that involve remaining engaged with, and responsive to, machines. We are amazing at the click and tap of durational vigilance to this or that stimulus, presented at the business end of a complex device. Our uncanny and immersive cybernetic attention is a defining characteristic of the age. Our human attention — our ability to be with ourselves and with others, our ability to receive the world with our minds and senses, our ability to daydream, read a book uninterrupted, or watch a sunset — well, many of us are finding it increasingly difficult to remember what that might even mean.
This isn’t really an accident. Over the last century or so, a series of elaborate programs of laboratory research have worked to slice and dice the attentional powers of human beings. Their aim? To understand the operational capacities of those who would be asked to shoot down airplanes, monitor radar screens, and otherwise sit at the controls of large and expensive machines. Seated in front of countless instruments, experimental subjects were asked to listen and look, to track and trigger. Psychologists stood by with stopwatches, quantifying our cybernetic capacities, and seeking ways to extend them. For those of us who have come of age in the fluorescence of the “attention economy”, it is interesting to look back and try to catch glimpses of the way that the movement of human eyeballs came under precise scrutiny, the way that machine vigilance became a field of study. We know now that the mechanomorphic attention dissected in those laboratories is the machine attention that is relentlessly priced in our online lives — to deleterious effects.
You could say that this process began with the fascinating and now mostly forgotten tool known as the “pursuit test”. Part steampunk videogame, part laboratory snuff-flick, the pursuit test staged and restaged the integration of man and machine across the first decades of the twentieth century…
Fascinating– and timely: “Cybernetic Attention– All Watched over by Machines We Learned to Watch,” from @publicdomainrev.bsky.social. Eminently worth reading in full.
* José Ortega y Gasset
###
As we untangle engagement, we might send thoughtful birthday greetings to a man whose work influenced the endeavors described in the piece featured above, Hermann Ebbinghaus; he was born on this date in 1850. A psychologist, he pioneered the experimental study of memory and discovered the learning curve, the forgetting curve, and the spacing effect.
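That forgetting curve is commonly summarized as an exponential decay, a later textbook idealization of Ebbinghaus’s measurements rather than his own notation:

```latex
% Retention R after elapsed time t, where S is the relative
% strength (stability) of the memory; spaced repetition works
% by increasing S with each successful recall.
R = e^{-t/S}
```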