Posts Tagged ‘agency’
“Don’t throw the baby out with the bath water”*…

From Dynomight (and here), an argument that algorithms, while problematic today, aren’t necessarily evil…
What does “algorithmic ranking” bring to mind for you? Personally, I get visions of political ragebait and supplement hucksters and unnecessary cleavage. I see cratering attention spans and groups of friends on the subway all blankly swiping at glowing rectangles. I see overconfident charlatans and the hollow eyes of someone reviewing the 83 photos she just made her boyfriend take of her in front of a sunset. Most of all, I see dreams of creative expression perverted into a desperate scramble to do whatever it takes to please the Algorithm.
Of course, lots of people like algorithmic ranking, too.
I theorize that the skeptics are right and algorithmic ranking is in fact bad. But it’s not algorithmic ranking per se that’s bad—it’s just that the algorithms you’re used to don’t care about your goals. That might be an inevitable consequence of “enshittification”, but the solution isn’t to avoid all algorithms, just to avoid algorithms you can’t control. This will become increasingly important in the future as algorithmic ranking becomes algorithmic everything…
Dynomight elaborates on the problem, its genesis, and a plausible answer: “Algorithmic ranking is unfairly maligned,” from @dynomighty.bsky.social.
###
As we rethink rankings, we might recall that on this date in 1969 a group at the top of most lists took it to the roof: The Beatles performed on the rooftop of their Apple Corps headquarters at 3 Savile Row, in central London’s office and fashion district. Joined by guest keyboardist Billy Preston, the band played a 42-minute set before the Metropolitan Police arrived and ordered them to “reduce the volume.” It was the final public performance of their career. The concert ended with “Get Back,” after which John Lennon quipped, “I’d like to say thank you on behalf of the group and ourselves, and I hope we’ve passed the audition.”
The full concert footage is available at the invaluable Internet Archive. Here, a taste of “Get Back”…
“I fear the day when the technology overlaps with our humanity. The world will only have a generation of idiots.”*…
Alva Noë on the importance of humans hanging on to their humanity– for all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness…
[Starting with Turing, Noë considers the relative roles of humans and technology across a number of spheres, including music…]
… The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one’s posture, hands, fingers, legs and feet to the piano’s mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.
But we can’t. And we won’t. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument’s demands.
And this fact – the difficulty we encounter in the face of the keyboard’s insistence – is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.
For it is the player’s fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine’s demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control – a mechanism – but rather an opportunity for action and expression.
And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back…
… The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.
The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – i.e., merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
We can’t help doing this; no computer can do this…
Eminently worth reading in full: “Rage against the machine,” from @alvanoe in @aeonmag.
For more, see Noë’s The Entanglement: How Art and Philosophy Make Us What We Are.
* Albert Einstein
###
As we resolve to wrestle, we might recall that it was on this date in 1969 that UCLA professor Leonard Kleinrock (aided by his student assistant Charley Kline) created the first networked computer-to-computer connection, with SRI programmer Bill Duvall in Menlo Park, over which they sent the first host-to-host message… or at least part of it. Duvall’s machine crashed partway through the transmission, meaning the only letters received from the attempted “login” were “lo.” The next month two more nodes were added (UCSB and the University of Utah), and the network was dubbed ARPANET.
Still, “lo”– perhaps an appropriate way to announce what would grow up to be the internet.

“Real generosity towards the future lies in giving all to the present”*…
Iwan Rhys Morus suggests that we’re in thrall to a Victorian paradigm that haunts us still: the idea that inventors and entrepreneurs hold the keys to the utopian future…
Tech titans like Elon Musk and Jeff Bezos present themselves as men who could single-handedly shape the future. For their supporters, their ruthless drive toward success is their key virtue. And their showmanship — Musk sending a Tesla Roadster into space on a Falcon Heavy rocket, or Bezos sending Captain Kirk into orbit with Blue Origin — is a way of demonstrating that virtue and asserting they are in control.
We owe to the Victorians the idea that there is a firm link between virtue and technological agency. They established a powerful paradigm that continues to haunt us: that the future is (or can be) a utopia, and inventors and entrepreneurs are the ones who know how to get there.
While our notions of virtue have shifted today, we still assume that future-making is the prerogative of very specific sorts of innovators — even as their imagined identities have fractured and transformed. The assumption that innovation is the property of charismatic individuals still underlies the way we think about technology.
…
The seductive power of Victorian thinking about the relationship between character, technology, and the future remains pervasive, even if views about just what the proper character of the inventor should be have shifted…
With its focus on individual virtue, the Victorian vision of the future is an exclusive one. When we subscribe to this paradigm about how — and by whom — the future is made, we’re also relinquishing control over that future. We’re acknowledging that tomorrow belongs to them, not to us.
“Back To The Victorian Future,” by @irmorus1 in @NoemaMag. Eminently worth reading in full.
* Albert Camus
###
As we ponder power and its purpose, we might send inclusive birthday greetings to Jacques Lucien Monod; he was born on this date in 1910. A biochemist, he shared (with François Jacob and André Lwoff) the Nobel Prize in Physiology or Medicine in 1965, “for their discoveries concerning genetic control of enzyme and virus synthesis.”
But Monod, who became the director of the Pasteur Institute, also made significant contributions to the philosophy of science– in particular via his 1971 book (based on a series of his lectures) Chance and Necessity, in which he examined the philosophical implications of modern biology. The importance of Monod’s work as a bridge between the chance and necessity of evolution and biochemistry on the one hand, and the human realm of choice and ethics on the other, can be seen in his influence on philosophers, biologists, and computer scientists including Daniel Dennett, Douglas Hofstadter, Marvin Minsky, and Richard Dawkins.


