(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Sheer dumb sentience”*…

The eyes of the conch snail

As the power of AI grows, we find ourselves searching for a way to tell whether it might– or has already– become sentient. Kristin Andrews and Jonathan Birch suggest that we should look to the minds of animals…

… Last year, [Google engineer Blake] Lemoine leaked the transcript [of an exchange he’d had with LaMDA, a Google AI system] because he genuinely came to believe that LaMDA was sentient – capable of feeling – and in urgent need of protection.

Should he have been more sceptical? Google thought so: they fired him for violation of data security policies, calling his claims ‘wholly unfounded’. If nothing else, though, the case should make us take seriously the possibility that AI systems, in the very near future, will persuade large numbers of users of their sentience. What will happen next? Will we be able to use scientific evidence to allay those fears? If so, what sort of evidence could actually show that an AI is – or is not – sentient?

The question is vast and daunting, and it’s hard to know where to start. But it may be comforting to learn that a group of scientists has been wrestling with a very similar question for a long time. They are ‘comparative psychologists’: scientists of animal minds.

We have lots of evidence that many other animals are sentient beings. It’s not that we have a single, decisive test that conclusively settles the issue, but rather that animals display many different markers of sentience. Markers are behavioural and physiological properties we can observe in scientific settings, and often in our everyday life as well. Their presence in animals can justify our seeing them as having sentient minds. Just as we often diagnose a disease by looking for lots of symptoms, all of which raise the probability of having that disease, so we can look for sentience by investigating many different markers…

On learning from our experience of animals to assess AI sentience: “What has feelings?” from @KristinAndrewz and @birchlse in @aeonmag.

Apposite: “The Future of Human Agency” (a Pew round-up of expert opinion on the future impact of AI)

Provocative in a resonant way: “The Philosopher Who Believes in Living Things.”

* Kim Stanley Robinson, 2312

###

As we talk to the animals, we might send thoughtful birthday greetings to J. P. Guilford; he was born on this date in 1897. A psychologist, he’s best remembered as a developer and practitioner of psychometrics, the quantitative measurement of subjective psychological phenomena (such as sensation, personality, and attention).

Guilford’s Structure of Intellect (SI) theory rejected the view that intelligence could be characterized in a single numerical parameter. He proposed that three dimensions were necessary for accurate description: operations, content, and products.

Guilford also developed the concepts of “convergent” and “divergent” thinking, as part of work he did emphasizing the importance of creativity in industry, science, arts, and education, and in urging more research into its nature.

A Review of General Psychology survey published in 2002 ranked Guilford as the 27th most cited psychologist of the 20th century.


“The key to artificial intelligence has always been the representation”*…

AI is coming for search. OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which, asks the estimable Ted Chiang, do we prefer?

… Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse…
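Chiang’s photocopy-of-a-photocopy point can be put in miniature code. The sketch below is a hypothetical stand-in, not JPEG itself: quantization plays the role of lossy encoding, and each “re-save” at a slightly different quality (grid spacing) compounds the error against the original, just as resaving a jpeg accumulates artifacts.

```python
import math

# Generational loss in lossy compression, in miniature.
# Quantization stands in for JPEG encoding: each "save" snaps samples
# to a coarse grid, discarding detail it cannot restore. Re-saving at
# a *different* quality setting (grid spacing) compounds the loss --
# the digital photocopy of a photocopy.

def resave(signal, step):
    """Quantize each sample to the nearest multiple of `step`."""
    return [round(x / step) * step for x in signal]

def rms_error(a, b):
    """Root-mean-square difference between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

original = [math.sin(i / 5) for i in range(200)]  # a smooth "image"

generation = original
errors = []
for step in (0.30, 0.27, 0.33, 0.29, 0.31):  # five saves, varying "quality"
    generation = resave(generation, step)
    errors.append(rms_error(original, generation))

# Each pass can only merge or shift quantization levels, never recover
# them, so the error relative to the original grows generation by
# generation, and the number of distinct levels shrinks.
for g, err in enumerate(errors, 1):
    print(f"generation {g}: RMS error {err:.3f}")
```

The values and step sizes are arbitrary; the point is only the direction of travel — information leaves on every save and never comes back.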

Should we bank on AI in search? “ChatGPT Is a Blurry JPEG of the Web,” in @NewYorker.

For more of Chiang’s thoughts on AI, listen to (or read) his interview with Ezra Klein, in which he suggests that “most fears about A.I. are best understood as fears about capitalism.”

Also apposite: “AI, Minus the Hype” and “Imagining The QAnon Of The AI Era.”

* Jeff Hawkins (who seems to be agreeing with Baudrillard that “the sad thing about artificial intelligence is that it lacks artifice and therefore intelligence”)

###

As we fiddle with our filters, we might spare a thought for a man whose work has created a gargantuan training set for AI: Alphonse Bertillon; he died on this date in 1914. A police officer and biometrics researcher, he applied the anthropological technique of anthropometry to law enforcement, creating an identification system based on physical measurements. Anthropometry was the first scientific system used by police to identify criminals; before that time, criminals could only be identified by name or photograph. While the method was eventually eclipsed by fingerprinting, then DNA analysis, it is still in use.

Bertillon is also the inventor of the mug shot. Photographing of criminals had begun in the 1840s, only a few years after the invention of photography, but it was in 1888 that Bertillon standardized the process.

Bertillon’s work has been hugely impactful– and lies at the root of many AI systems being developed to finger criminals (especially via facial recognition). It’s worth remembering that his (flawed) evidence was used to wrongly convict Alfred Dreyfus in the infamous Dreyfus affair.

Bertillon’s mug shot self portrait (source)

“On the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on our way to using it to create a world of suffering and chaos. Paradoxical, no?”*…

Joseph Weizenbaum, a distinguished professor at MIT, was one of the fathers of artificial intelligence and computing as we know it; he was also one of its earliest critics– one whose concerns remain all too current. After a review of his warnings, Librarian Shipwreck shares a still-relevant set of questions Weizenbaum proposed…

At the end of his essay “Once more—A Computer Revolution” which appeared in the Bulletin of the Atomic Scientists in 1978, Weizenbaum concluded with a set of five questions. As he put it, these were the sorts of questions that “are almost never asked” when it comes to this or that new computer related development. These questions did not lend themselves to simple yes or no answers, but instead called for serious debate and introspection. Thus, in the spirit of that article, let us conclude this piece not with definitive answers, but with more questions for all of us to contemplate. Questions that were “almost never asked” in 1978, and which are still “almost never asked” in 2023. They are as follows:

• Who is the beneficiary of our much-advertised technological progress and who are its victims?

• What limits ought we, the people generally and scientists and engineers particularly, to impose on the application of computation to human affairs?

• What is the impact of the computer, not only on the economies of the world or on the war potential of nations, etc…but on the self-image of human beings and on human dignity?

• What irreversible forces is our worship of high technology, symbolized most starkly by the computer, bringing into play?

• Will our children be able to live with the world we are here and now constructing?

As Weizenbaum put it “much depends on answers to these questions.”

Much still depends on answers to these questions.

Eminently worth reading in full: “‘Computers enable fantasies’ – on the continued relevance of Weizenbaum’s warnings,” from @libshipwreck.

See also: “An island of reason in the cyberstream – on the life and thought of Joseph Weizenbaum.”

* Joseph Weizenbaum (1983)

###

As we stay grounded, we might spare a thought for George Stibitz; he died on this date in 1995. A Bell Labs researcher, he was known for his work in the 1930s and 1940s on the realization of Boolean logic digital circuits using electromechanical relays as the switching element– work for which he is internationally recognized as one of the fathers of the modern digital computer.

In 1937, Stibitz, then a scientist at Bell Laboratories, built a digital machine based on relays, flashlight bulbs, and metal strips cut from tin cans. He called it the “Model K” because most of it was constructed on his kitchen table. It worked on the principle that if two relays were activated, they caused a third relay to become active; this third relay represented the sum of the operation. Then, in 1940, he gave a demonstration of the first remote operation of a computer.
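In modern terms, the relay logic described above amounts to a binary half adder. A minimal sketch of that Boolean logic (not of the relay hardware itself, and the function name is our own):

```python
# The Model K's relay logic, restated as Boolean operations: two input
# relays feed a "sum" output (exclusive-or) and a "carry" output that
# closes only when both inputs are active.

def half_adder(a: bool, b: bool) -> tuple:
    """Add two binary digits; return (sum_bit, carry_bit)."""
    return (a != b, a and b)

# The truth table the relay circuit realized:
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry {int(c)}, sum {int(s)}")
```

Chaining two half adders (plus an OR for the carries) gives a full adder, which is all a relay machine needs to add numbers of any width.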


“I would rather have questions that can’t be answered than answers that can’t be questioned”*…

… or, as Confucius would have it, “real knowledge is to know the extent of one’s ignorance.” Happily, Wikenigma is here to help…

Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.

Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [949 so far]

That’s to say, a compendium of so-called ‘Known Unknowns’…

Consider, for example…

How do marine turtles accurately migrate thousands of kilometers for nesting?

Can Beal’s conjecture be proved?

Can one solve the “envelope paradox”?

Do “naked singularities” exist?

What is the etymology of the word “plot” (which appears only in English)?

What were the purposes of “Perforated Batons,” man-made historical artifacts formed from deer antlers, dating back 12,000-24,000 years and found widely across Europe?

What are the function, importance, and evolutionary history of human “inner speech”?

One could– and should– go on: Wikenigma, via @Recomendo6.

* Richard Feynman

###

As we wonder, we might spare a thought for a man who embodied curiosity, Marvin Minsky; he died on this date in 2016. A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab). Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics. He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert). His other inventions include mechanical hands and the “Muse” synthesizer.


Written by (Roughly) Daily

January 24, 2023 at 1:00 am

“Poetry might be defined as the clear expression of mixed feelings”*…

Can artificial intelligence have those feelings? Scientist and poet Keith Holyoak explores:

… Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?…

A provocative consideration: “Can AI Write Authentic Poetry?” from @mitpress.

Apposite: a fascinating Twitter thread on “why GPT3 algorithm proficiency at producing fluent, correct-seeming prose is an exciting opportunity for improving how we teach writing, how students learn to write, and how this can also benefit profs who assign writing, but don’t necessarily teach it.”

* W. H. Auden

###

As we ruminate on rhymes, we might send thoughtful birthday greetings to Michael Gazzaniga; he was born on this date in 1939. A leading researcher in cognitive neuroscience (the study of the neural basis of mind), his work has focused on how the brain enables humans to perform those advanced mental functions that are generally associated with what we call “the mind.” Gazzaniga has made significant contributions to the emerging understanding of how the brain facilitates such higher cognitive functions as remembering, speaking, interpreting, and making judgments.


Written by (Roughly) Daily

December 12, 2022 at 1:00 am