“Sheer dumb sentience”*…
As the power of AI grows, we find ourselves searching for a way to tell whether it might– or already has– become sentient. Kristin Andrews and Jonathan Birch suggest that we should look to the minds of animals…
… Last year, [Google engineer Blake] Lemoine leaked the transcript [of an exchange he’d had with LaMDA, a Google AI system] because he genuinely came to believe that LaMDA was sentient – capable of feeling – and in urgent need of protection.
Should he have been more sceptical? Google thought so: they fired him for violation of data security policies, calling his claims ‘wholly unfounded’. If nothing else, though, the case should make us take seriously the possibility that AI systems, in the very near future, will persuade large numbers of users of their sentience. What will happen next? Will we be able to use scientific evidence to allay those fears? If so, what sort of evidence could actually show that an AI is – or is not – sentient?
The question is vast and daunting, and it’s hard to know where to start. But it may be comforting to learn that a group of scientists has been wrestling with a very similar question for a long time. They are ‘comparative psychologists’: scientists of animal minds.
We have lots of evidence that many other animals are sentient beings. It’s not that we have a single, decisive test that conclusively settles the issue, but rather that animals display many different markers of sentience. Markers are behavioural and physiological properties we can observe in scientific settings, and often in our everyday life as well. Their presence in animals can justify our seeing them as having sentient minds. Just as we often diagnose a disease by looking for lots of symptoms, all of which raise the probability of having that disease, so we can look for sentience by investigating many different markers…
On learning from our experience of animals to assess AI sentience: “What has feelings?” from @KristinAndrewz and @birchlse in @aeonmag.
Apposite: “The Future of Human Agency” (a Pew round-up of expert opinion on the future impact of AI)
Provocative in a resonant way: “The Philosopher Who Believes in Living Things.”
* Kim Stanley Robinson, 2312
###
As we talk to the animals, we might send thoughtful birthday greetings to J. P. Guilford; he was born on this date in 1897. A psychologist, he’s best remembered as a developer and practitioner of psychometrics, the quantitative measurement of subjective psychological phenomena (such as sensation, personality, and attention).
Guilford’s Structure of Intellect (SI) theory rejected the view that intelligence could be characterized in a single numerical parameter. He proposed that three dimensions were necessary for accurate description: operations, content, and products.
Guilford also developed the concepts of “convergent” and “divergent” thinking, as part of work he did emphasizing the importance of creativity in industry, science, the arts, and education, and in urging more research into its nature.
A Review of General Psychology survey, published in 2002, ranked Guilford as the 27th most cited psychologist of the 20th century.
“Poetry might be defined as the clear expression of mixed feelings”*…
Can artificial intelligence have those feelings? Scientist and poet Keith Holyoak explores:
… Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.
Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?…
A provocative consideration: “Can AI Write Authentic Poetry?” @mitpress.
Apposite: a fascinating Twitter thread on “why GPT3 algorithm proficiency at producing fluent, correct-seeming prose is an exciting opportunity for improving how we teach writing, how students learn to write, and how this can also benefit profs who assign writing, but don’t necessarily teach it.”
* W. H. Auden
###
As we ruminate on rhymes, we might send thoughtful birthday greetings to Michael Gazzaniga; he was born on this date in 1939. A leading researcher in cognitive neuroscience (the study of the neural basis of mind), his work has focused on how the brain enables humans to perform those advanced mental functions that are generally associated with what we call “the mind.” Gazzaniga has made significant contributions to the emerging understanding of how the brain facilitates such higher cognitive functions as remembering, speaking, interpreting, and making judgments.
“It takes something more than intelligence to act intelligently”*…
AI isn’t human, but that doesn’t mean, Nathan Gardels argues (citing three recent essays in Noema, the magazine that he edits), that it cannot be intelligent…
As the authors point out, “the dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data.”
Critics of this approach argue that its “insurmountable wall” is “symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.).”
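The rule the critics describe– work the rightmost column first, write the ones digit, carry the rest leftward– can be made concrete in a few lines of code. A minimal sketch (the function name and structure are our illustration, not from the essay): multiplying a numeral by a single digit using only step-by-step symbol manipulation, with no appeal to learned patterns.

```python
def multiply_symbolically(number: str, digit: int) -> str:
    """Multiply a decimal numeral by a single digit the schoolbook way:
    column by column, right to left, carrying the excess leftward."""
    carry = 0
    result_digits = []
    for ch in reversed(number):            # start at the furthest-right column
        product = int(ch) * digit + carry
        result_digits.append(str(product % 10))  # write down the ones digit
        carry = product // 10                    # carry the rest to the next column
    if carry:
        result_digits.append(str(carry))
    return "".join(reversed(result_digits))

print(multiply_symbolically("234", 6))  # → 1404
```

Every step applies a strict rule to symbols, which is exactly the sort of reasoning the critics claim deep learning cannot reach by pattern-matching alone.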
Such reasoning would enable logical inferences that can apply what has been learned to unprogrammed contingencies, thus “completing patterns” by connecting the dots. LeCun and Browning argue that, as with the evolution of the human mind itself, in time and with manifold experiences, this ability may emerge as well from the neural networks of intelligent machines.
“Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach,” they contend. “They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.”
…
The philosopher Charles Taylor associates the breakthroughs of consciousness in that era with the arrival of written language. In his view, access to the stored memories of this first cloud technology enabled the interiority of sustained reflection from which symbolic competencies evolved.
This “transcendence” beyond oral narrative myth narrowly grounded in one’s own immediate circumstance and experience gave rise to what the sociologist Robert Bellah called “theoretic culture” — a mental organization of the world at large into the abstraction of symbols. The universalization of abstraction, in turn and over a long period of time, enabled the emergence of systems of thought ranging from monotheistic religions to the scientific reasoning of the Enlightenment.
Not unlike the transition from oral to written culture, might AI be the midwife to the next step of evolution? As has been written in this column before, we have only become aware of climate change through planetary computation that abstractly models the Earthly organism beyond what any of us could conceive out of our own un-encompassing knowledge or direct experience.
…
For Bratton and Agüera y Arcas, it comes down in the end to language as the “cognitive infrastructure” that can comprehend patterns, referential context and the relationality among them when facing novel events.
“There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real-world referents,” they observe.
As an “executable” translation of human language, code does not produce the same kind of intelligence that emerges from human consciousness, but is intelligence nonetheless. What is most likely to emerge in their view is not “artificial” intelligence when machines become more human, but “synthetic” intelligence, which fuses both.
As AI further develops through human prompts or a capacity to guide its own evolution by acquiring a sense of itself in the world, what is clear is that it is well on the way to taking its place alongside, perhaps conjoining and becoming synthesized with, other intelligences, from Homo sapiens to insects to forests to the planetary organism itself…
AI takes its place among and may conjoin with other multiple intelligences: “Cognizant Machines: A What Is Not A Who.” Eminently worth reading in full– both the linked essay and the articles referenced in it.
* Dostoyevsky, Crime and Punishment
###
As we make room for company, we might recall that it was on this date in 1911 that a telegraph operator on the 7th floor of The New York Times headquarters in Times Square sent a message– “This message sent around the world”– that left at 7:00 p.m., traveled over 28,000 miles, and was relayed by 16 different operators. It arrived back at the Times only 16.5 minutes later.
The “around the world telegraphy” record had been set in 1903, when President Roosevelt celebrated the completion of the Commercial Pacific Cable by sending the first round-the-world message in just 9 minutes. But that message had been given priority status; the Times wanted to see how long a regular message would take — and what route it would follow.
The building from which the message originated is now called One Times Square and is best known as the site of the New Year’s Eve ball drop.