“I fear the day when the technology overlaps with our humanity. The world will only have a generation of idiots.”*…
Alva Noë on the importance of humans hanging on to their humanity– for all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness…
[Starting with Turing, Noë considers the relative roles of humans and technology across a number of spheres, including music…]
… The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one’s posture, hands, fingers, legs and feet to the piano’s mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.
But we can’t. And we won’t. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument’s demands.
And this fact – the difficulty we encounter in the face of the keyboard’s insistence – is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.
For it is the player’s fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine’s demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control – a mechanism – but rather an opportunity for action and expression.
And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back…
… The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.
The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – i.e., merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
We can’t help doing this; no computer can do this…
Eminently worth reading in full: “Rage against the machine,” from @alvanoe in @aeonmag.
For more, see Noë’s The Entanglement: How Art and Philosophy Make Us What We Are.
* Albert Einstein
###
As we resolve to wrestle, we might recall that it was on this date in 1969 that UCLA professor Leonard Kleinrock (aided by his student assistant Charley Kline) created the first networked computer-to-computer connection (with SRI programmer Bill Duvall in Palo Alto), over which they sent the first networked computer-to-computer message… or at least part of it. Duvall’s machine crashed partway through the transmission, meaning the only letters received from the attempted “login” were “lo.” By the end of the year two more nodes had been added (UCSB and the University of Utah), and the network was dubbed ARPANET.
Still, “lo”– perhaps an appropriate way to announce what would grow up to be the internet.

“We are as gods and might as well get good at it”*…
In 1968, Stewart Brand and a small group of colleagues published the first Whole Earth Catalog, then followed it over the years with a series of updates, spin-offs, and sequels. An at-the-time unprecedented marriage of counterculture magazine and product catalog, it (and its successors) have been enormously influential. Now, as Long Now’s Jacob Kupperman reports, the entire run of Whole Earth publications is freely available online…
When the Whole Earth Catalog arrived in the Fall of 01968, it came bearing a simple, epochal label: “Access to Tools.” As its editor and Long Now Co-founder Stewart Brand wrote in the introduction to that first edition, the goal was for the Catalog to serve as an “evaluation and access device” for tools that empowered its readers “to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested.”
The key word in all of that idealistic declaration of purpose was “access.” The Whole Earth Catalog did not intend to directly grant its readers this knowledge, wisdom, and mastery, but to provide a kaleidoscopic array of gateways from which they could attempt to find it themselves.
Yet for years, access to the Whole Earth Catalog itself has been difficult. 55 years on from the first publication of the Catalog, it mostly lives on in the interstices — as a symbol of a vibrant countercultural history and an inspiration for writers, designers, and technologists, but less so as an actual set of catalogs that you can read. The Catalog is not lost media per se — copies can be found in libraries, archives, and personal collections across the world — but accessing its trove of information is no longer as easy as it was in its heyday.
That is, until now.
On the 55th anniversary of the publication of the original Whole Earth Catalog, Gray Area and the Internet Archive have made the Catalog freely available online via the Whole Earth Index, a website bringing together more than 130 Whole Earth Catalog-related publications, ranging from some of the earliest Catalogs published in the late 01960s and early 01970s to 02002 issues of Whole Earth Magazine.
Within the site’s grid of publications rests a cornucopia of writing and curation, from in-depth looks at space colonies to ecological analyses of the insurance industry to reporting on the state of the global teenager at the turn of the 01990s. The Whole Earth Index is a work of love, a noncommercial enterprise designed, as project lead and Gray Area Executive Director Barry Threw told Long Now Ideas, to “allow us to reflect on how we got to where we are and regain some of that connection to the countercultural world” of the Bay Area of the 01960s and 01970s.
…
For the people who helped make the Whole Earth Catalog and its descendants, the Whole Earth Index is in many ways a dream come true. Long Now Board Member Kevin Kelly, who wrote for, edited, and led the CoEvolution Quarterly, the Whole Earth Review, and later editions of the Whole Earth Catalog, told us that he found “the interface to this historic collection to be as good, maybe even better, as reading the original paper artifacts,” adding that he’d “been giddy with delight in how satisfying this archive is.” The project’s model of “instant access from your home, for free!”, Kelly noted, was something that the team behind the Whole Earth Catalog could only dream of when they began their work.
The open-ended design of the Whole Earth Index is intended as a sort of provocation towards future works — a message and invitation in the spirit of the original catalog’s epochal claim that “we are as gods and might as well get good at it.” The tens of thousands of scanned pages will live on the servers of the Internet Archive — as good a place as any to try and stave off a Digital Dark Age — but the ideas of the Whole Earth Catalog and its heirs will always live among those of us who read it and access its tools. What will you do with them?
The Whole Earth Catalog and its descendants are newly available online through the Whole Earth Index: “The Lasting Whole Earth Catalog,” from @Jacobkupp and @longnow.
* Stewart Brand, in the “Statement of Purpose” in the first Whole Earth Catalog
###
As we treasure tools, we might spare a thought for a man whose work kicked in about the same time as the Whole Earth Catalog– and intersected with it in myriad ways (e.g., The WELL), Jon Postel; he died on this date in 1998. A computer scientist, he played a pivotal role in creating and administering the Internet. As a graduate student in the late 1960s, he was instrumental in developing ARPANET, the forerunner of the internet. He is known principally for being the Editor of the Request for Comment (RFC) document series from which internet standards emerged, for Simple Mail Transfer Protocol (SMTP), and for founding and administering the Internet Assigned Numbers Authority (IANA) until his death.
During his lifetime he was referred to as the “god of the Internet” for his comprehensive influence; Postel himself noted that this “compliment” came with a barb, the suggestion that he should be replaced by a “professional,” and responded with typical self-effacing matter-of-factness: “Of course, there isn’t any ‘God of the Internet.’ The Internet works because a lot of people cooperate to do things together.”
“Without reflection, we go blindly on our way”*…
… or at least sociopathic. Indeed, Evgeny Morozov suggests, we may be well on our way. There may be versions of A.G.I. (Artificial General Intelligence) that will be a boon to society; but, he argues, the current approaches aren’t likely to yield them…
… The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.
A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting — some say impossible — task. But the benefits appear truly tantalizing.
Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry — without ever being programmed to do these things.

Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.

Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”
This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral. They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.

But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.
Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.
Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation…
… the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.

Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).

These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how…
[There follows a bracing run-down…]
… Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”

The A.G.I. lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.
But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.
It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem — hence the excitement about A.G.I.
However, if A.G.I.-ism really is neoliberalism by other means, then we should be ready to see fewer — not more — intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. A.G.I.’s grand project of amplifying intelligence may end up shrinking it.
Because of such solutionist bias, even seemingly innovative policy ideas around A.G.I. fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety.” This is premised on the false idea that there’s no alternative to A.G.I.

But wouldn’t our quest for augmenting intelligence be far more effective if the government funded a Manhattan Project for culture and education and the institutions that nurture them instead?
Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for A.G.I. start-ups, reinforcing the falsehood that society doesn’t exist…
If it’s true that we shape our tools, and that our tools then shape us, it behooves us to be very careful as to how we shape them… Eminently worth reading in full: “The True Threat of Artificial Intelligence” (gift link) from @evgenymorozov in @nytimes.
Apposite, on the A.I. we currently have: “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con,” from @baldurbjarnason.
* Margaret J. Wheatley
###
As we set aside solutionism, we might send thoroughly-organized birthday greetings to Josiah Wedgwood; he was born on this date in 1730. An English potter, businessman (he founded the Wedgwood company), and inventor (he designed the company’s process machinery and high-temperature beehive-shaped kilns), he is credited, via his technique of “division of labor,” with the industrialization of the manufacture of pottery– and via his example, much of British (and thus American) manufacturing. Wedgwood was a member of the Lunar Society and the Royal Society, and an ardent abolitionist. His daughter, Susannah, was the mother of Charles Darwin.