(Roughly) Daily

Posts Tagged ‘cognition’

“Sheer dumb sentience”*…

The eyes of the conch snail

As the power of AI grows, we find ourselves searching for a way to tell whether it might – or has – become sentient. Kristin Andrews and Jonathan Birch suggest that we should look to the minds of animals…

… Last year, [Google engineer Blake] Lemoine leaked the transcript [of an exchange he’d had with LaMDA, a Google AI system] because he genuinely came to believe that LaMDA was sentient – capable of feeling – and in urgent need of protection.

Should he have been more sceptical? Google thought so: they fired him for violation of data security policies, calling his claims ‘wholly unfounded’. If nothing else, though, the case should make us take seriously the possibility that AI systems, in the very near future, will persuade large numbers of users of their sentience. What will happen next? Will we be able to use scientific evidence to allay those fears? If so, what sort of evidence could actually show that an AI is – or is not – sentient?

The question is vast and daunting, and it’s hard to know where to start. But it may be comforting to learn that a group of scientists has been wrestling with a very similar question for a long time. They are ‘comparative psychologists’: scientists of animal minds.

We have lots of evidence that many other animals are sentient beings. It’s not that we have a single, decisive test that conclusively settles the issue, but rather that animals display many different markers of sentience. Markers are behavioural and physiological properties we can observe in scientific settings, and often in our everyday life as well. Their presence in animals can justify our seeing them as having sentient minds. Just as we often diagnose a disease by looking for lots of symptoms, all of which raise the probability of having that disease, so we can look for sentience by investigating many different markers…

On learning from our experience of animals to assess AI sentience: “What has feelings?” from @KristinAndrewz and @birchlse in @aeonmag.

Apposite: “The Future of Human Agency” (a Pew round-up of expert opinion on the future impact of AI)

Provocative in a resonant way: “The Philosopher Who Believes in Living Things.”

* Kim Stanley Robinson, 2312

###

As we talk to the animals, we might send thoughtful birthday greetings to J. P. Guilford; he was born on this date in 1897. A psychologist, he’s best remembered as a developer and practitioner of psychometrics, the quantitative measurement of subjective psychological phenomena (such as sensation, personality, and attention).

Guilford’s Structure of Intellect (SI) theory rejected the view that intelligence could be characterized by a single numerical parameter. He proposed that three dimensions were necessary for accurate description: operations, content, and products.

Guilford also developed the concepts of “convergent” and “divergent” thinking as part of his work emphasizing the importance of creativity in industry, science, the arts, and education, and urging more research into its nature.

A Review of General Psychology survey, published in 2002, ranked Guilford as the 27th most cited psychologist of the 20th century.

source

“Poetry might be defined as the clear expression of mixed feelings”*…

Can artificial intelligence have those feelings? Scientist and poet Keith Holyoak explores:

… Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?…

A provocative consideration: “Can AI Write Authentic Poetry?” from @mitpress.

Apposite: a fascinating Twitter thread on “why GPT-3’s proficiency at producing fluent, correct-seeming prose is an exciting opportunity for improving how we teach writing, how students learn to write, and how this can also benefit profs who assign writing, but don’t necessarily teach it.”

* W. H. Auden

###

As we ruminate on rhymes, we might send thoughtful birthday greetings to Michael Gazzaniga; he was born on this date in 1939. A leading researcher in cognitive neuroscience (the study of the neural basis of mind), his work has focused on how the brain enables humans to perform those advanced mental functions that are generally associated with what we call “the mind.” Gazzaniga has made significant contributions to the emerging understanding of how the brain facilitates such higher cognitive functions as remembering, speaking, interpreting, and making judgments.

source

Written by (Roughly) Daily

December 12, 2022 at 1:00 am

“The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office”*…

For as long as humans have thought, humans have thought about thinking. George Cave on the power and the limits of the metaphors we’ve used to do that…

For thousands of years, humans have described their understanding of intelligence with engineering metaphors. In the 3rd century BCE, the invention of hydraulics popularized the model of fluid flow (“humours”) in the body. This lasted until the 1500s, supplanted by the invention of automata and the idea of humans as complex machines. From electrical and chemical metaphors in the 1700s to advances in communications a century later, each metaphor reflected the most advanced thinking of that era. Today is no different: we talk of brains that store, process and retrieve memories, mirroring the language of computers.

I’ve always believed metaphors to be helpful and productive in communicating unfamiliar concepts. But this fascinating history of cognitive science metaphors shows that flawed metaphors can take hold and limit the scope for alternative ideas. In the worst case, the EU spent 10 years and $1.3 billion building a model of the brain based on the incorrect belief that the brain functions like a computer…

Thinking about thinking, from @George_Cave in @the_prepared.

Apposite: “Finding Language in the Brain.”

* Robert Frost

###

As we cogitate on cognition, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages – which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

source

Written by (Roughly) Daily

December 9, 2022 at 1:00 am

“There is nothing more tentative, nothing more empirical (superficially, at least) than the process of establishing an order among things; nothing that demands a sharper eye or a surer, better-articulated language”*…

James Vincent on the emergence of the earliest writing and its impact on culture, with special attention to the phenomenon of the “list” and its role in the birth of metrology…

Measurement was a crucial organizing principle in ancient Egypt, but metrology itself does not begin with nilometers. To understand its place in human culture, we have to trace its roots back further, to the invention of writing itself. For without writing, no measures can be recorded. The best evidence suggests that the written word was created independently thousands of years ago by a number of different cultures scattered around the world: in Mesopotamia, Mesoamerica, China, and Egypt. But it’s in Mesopotamia—present-day Iraq—where the practice is thought to have been invented first.

There’s some debate over whether this invention of writing enabled the first states to emerge, giving their rulers the ability to oversee and allocate resources, or whether it was the demands of the early states that in turn led to the invention of writing. Either way, the scribal arts offered dramatic new ways to process knowledge, allowing for not only superior organization, but also superior thinking. Some scholars argue that the splitting of noun and number on clay tablets didn’t just allow kings to better track their taxes but was tantamount to a cognitive revolution: a leap forward that allowed humans to abstract and categorize the world around them like never before.

Lists may not seem like cognitive dynamite, but their proliferation appears to have helped develop new modes of thought in early societies, encouraging us to think analytically about the world. “The list relies on discontinuity rather than continuity,” writes anthropologist Jack Goody. “[I]t encourages the ordering of the items, by number, by initial sound, by category, etc. And the existence of boundaries, external and internal, brings greater visibility to categories, at the same time as making them more abstract.”…

More at: “What If… Listicles Are Actually an Ancient Form of Writing and Narrative?” from @jjvincent in @lithub.

* Michel Foucault

###

As we organize, we might recall that it was on this date in 1872 that the Mary Celeste (often erroneously referred to as Marie Celeste, per a Conan Doyle short story about the ship), an American-registered merchant brigantine, was discovered adrift and deserted in the Atlantic Ocean off the Azores Islands.

The Canadian brigantine Dei Gratia found her in a dishevelled but seaworthy condition under partial sail and with her lifeboat missing. The last entry in her log was dated ten days earlier. She had left New York City for Genoa on November 7 and was still amply provisioned when found. Her cargo of alcohol was intact, and the captain’s and crew’s personal belongings were undisturbed. None of those who had been on board were ever seen or heard from again.

At the salvage hearings in Gibraltar following her recovery, the court’s officers considered various possibilities of foul play, including mutiny by Mary Celeste’s crew, piracy by the Dei Gratia crew or others, and conspiracy to carry out insurance or salvage fraud. No convincing evidence supported these theories, but unresolved suspicions led to a relatively low salvage award.

The inconclusive nature of the hearings fostered continued speculation as to the nature of the mystery. Hypotheses that have been advanced include the effects on the crew of alcohol fumes rising from the cargo, submarine earthquakes, waterspouts, attack by a giant squid, and paranormal intervention.

After the Gibraltar hearings, Mary Celeste continued in service under new owners. In 1885, her captain deliberately wrecked her off the coast of Haiti as part of an attempted insurance fraud.

The ship in 1861 (source)

Written by (Roughly) Daily

December 4, 2022 at 1:00 am

“It takes something more than intelligence to act intelligently”*…

AI isn’t human, but that doesn’t mean, Nathan Gardels argues (citing three recent essays in Noema, the magazine that he edits), that it cannot be intelligent…

As the authors point out, “the dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data.”

Critics of this approach argue that its “insurmountable wall” is “symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.).”

Such reasoning would enable logical inferences that can apply what has been learned to unprogrammed contingencies, thus “completing patterns” by connecting the dots. LeCun and Browning argue that, as with the evolution of the human mind itself, in time and with manifold experiences, this ability may emerge as well from the neural networks of intelligent machines.

“Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach,” they contend. “They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.”

The philosopher Charles Taylor associates the breakthroughs of consciousness in that era with the arrival of written language. In his view, access to the stored memories of this first cloud technology enabled the interiority of sustained reflection from which symbolic competencies evolved.

This “transcendence” beyond oral narrative myth narrowly grounded in one’s own immediate circumstance and experience gave rise to what the sociologist Robert Bellah called “theoretic culture” — a mental organization of the world at large into the abstraction of symbols. The universalization of abstraction, in turn and over a long period of time, enabled the emergence of systems of thought ranging from monotheistic religions to the scientific reasoning of the Enlightenment.

Not unlike the transition from oral to written culture, might AI be the midwife to the next step of evolution? As has been written in this column before, we have only become aware of climate change through planetary computation that abstractly models the Earthly organism beyond what any of us could conceive out of our own un-encompassing knowledge or direct experience.

For Bratton and Agüera y Arcas, it comes down in the end to language as the “cognitive infrastructure” that can comprehend patterns, referential context and the relationality among them when facing novel events.

“There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real-world referents,” they observe.

As an “executable” translation of human language, code does not produce the same kind of intelligence that emerges from human consciousness, but is intelligence nonetheless. What is most likely to emerge in their view is not “artificial” intelligence when machines become more human, but “synthetic” intelligence, which fuses both.

As AI further develops through human prompt or a capacity to guide its own evolution by acquiring a sense of itself in the world, what is clear is that it is well on the way to taking its place alongside, perhaps conjoining and becoming synthesized with, other intelligences, from homo sapiens to insects to forests to the planetary organism itself…

AI takes its place among and may conjoin with other multiple intelligences: “Cognizant Machines: A What Is Not A Who.” Eminently worth reading in full: both the linked essay and the articles referenced in it.

* Dostoyevsky, Crime and Punishment

###

As we make room for company, we might recall that it was on this date in 1911 that a telegraph operator on the 7th floor of The New York Times headquarters in Times Square sent a message – “This message sent around the world” – that left at 7:00 p.m., traveled over 28,000 miles, and was relayed by 16 different operators. It arrived back at the Times only 16.5 minutes later.

The “around the world telegraphy” record had been set in 1903, when President Roosevelt celebrated the completion of the Commercial Pacific Cable by sending the first round-the-world message in just 9 minutes. But that message had been given priority status; the Times wanted to see how long a regular message would take — and what route it would follow.

The building from which the message originated is now called One Times Square and is best known as the site of the New Year’s Eve ball drop.

source

Written by (Roughly) Daily

August 20, 2022 at 1:00 am