(Roughly) Daily

Posts Tagged ‘Ted Chiang’

“I’ve been discovering, much to my dismay, that I’m not a criminal mastermind or anything. I’m just brute force and my powers in no way include super-intelligence, which kind of pisses me off.”*…

A young boy with short hair, wearing a collared shirt, is intently reading a book with a focused expression in a dimly lit setting.

How do we accommodate ourselves to the prospect of an intelligence far greater than our own? In a consideration of J.D. Beresford’s The Hampdenshire Wonder (the first recognized appearance of the concept in modern English-language literature), Ted Chiang unspools the intellectual and cultural history of this now-prevalent trope…

J.D. Beresford’s The Hampdenshire Wonder is generally considered to be the first fictional treatment of superhuman intelligence, or “superintelligence.” This is a familiar trope for readers of science fiction today, but when the novel was originally published in 1911 it was anything but. What intellectual soil needed to be tilled before this idea could sprout?

At least since Plato, Western thought has clung to the idea of a Great Chain of Being, also known as the scala naturae, a system of classification in which plants rank below animals; humans rank above animals but below angels; and angels rank above humans but below God. There was no implied movement to this hierarchy; no one expected that plants would turn into animals given enough time, or that humans would turn into angels.

But by the 1800s, naturalists like Lamarck were questioning the assumption that species were immutable; they suggested that over time organisms actually grew more complex, with the human species as the pinnacle of the process. Darwin brought these speculations into public consciousness in 1859 with On the Origin of Species, and while he emphasized that evolution branches in many directions without any predetermined goal in mind, most people came to think of evolution as a linear progression.

Only then, I think, was it possible to conceive of humanity as a point on a line that could keep extending, to imagine something that would be more than human without being supernatural.

Darwin’s half-cousin, Francis Galton, was the first to suggest the idea that mental attributes like intelligence could be quantified. Galton published a volume called Hereditary Genius in 1869, and during the 1880s and ’90s he measured people’s reaction times as a way of gauging their mental ability, pioneering what we now call the field of psychometrics. By 1905, Alfred Binet had introduced a questionnaire to measure children’s intelligence; such questionnaires would evolve into IQ tests. The validity of psychometrics is quite controversial nowadays, as people disagree about what “intelligence” means and to what extent it can be measured. Some modern cognitive scientists do not consider the term intelligence particularly useful, instead preferring to use more specific terms like executive function, attentional control, or theory of mind. In the future “intelligence” may be regarded as a historical curiosity, like phlogiston, but until we develop a more precise vocabulary, we continue to use the term. Our contemporary notion of intelligence first gained currency around the time that Beresford was writing, and one can see how that converged with the idea of the superhuman in The Hampdenshire Wonder.

The titular character of The Hampdenshire Wonder is a boy named Victor Stott…

… Victor is born with an enormous head but an ordinary body, which disappoints his athletic father but also points to certain assumptions we have about the relationship between the mental and the physical. Beresford could have made Victor both an athlete and a genius, but he opted instead to follow a trope perhaps originated by Wells: the idea that evolution is pushing humanity toward a giant-brained phenotype, which is itself implicitly premised on the idea that mental ability and physical ability are in opposition to one another. This has remained a common trope in science fiction, although there are occasional depictions of mental and physical ability going hand in hand…

[Chiang traces the development of the “superintelligence,” the problems it raises, and the ways that they are treated in The Hampdenshire Wonder and elsewhere– “whatever your wisdom, you have to live in a world of comparative ignorance, a world which cannot appreciate you, but which can and will fall back upon the compelling power of the savage—the resort to physical, brute force.”…]

… In 1993 [Vernor] Vinge [here] argued that progress in computer technology would inevitably lead to a machine form of superintelligence. He proposed the term “the singularity” to describe the date—in the next few decades—beyond which events would be impossible to imagine. Since then, the technological singularity has largely replaced biological superintelligence as a trope in science fiction. More than that, it has become a trope in the Silicon Valley tech industry, giving rise to a discourse that is positively eschatological in tone. Superintelligence lies on the other side of a conceptual event horizon. When considered as a purely fictional idea, it imposes a limit on the kind of narratives one can tell about it. But when you start imagining it as something that could exist in reality, it becomes an end to human narratives altogether.

The Hampdenshire Wonder does posit a kind of eschatological scenario, but of a completely different order. After Victor’s downfall, Challis recounts the conclusion he came to after a conversation he’d had with the child, revealing a profound terror about the finiteness of knowledge:

Don’t you see that ignorance is the means of our intellectual pleasure? It is the solving of the problem that brings enjoyment—the solved problem has no further interest. So when all is known, the stimulus for action ceases; when all is known there is quiescence, nothingness. Perfect knowledge implies the peace of death.

… The idea that the search for understanding will inevitably lead to a kind of cognitive heat death is an interesting one. I don’t believe it and I doubt any scientist believes it, so it’s curious that Beresford—clearly an admirer of scientists—apparently did. Challis talks about the need for mysteries that elude explanation, which is a surprisingly anti-intellectual stance to find in a novel about superintelligence. While there is arguably a strain of anti-intellectualism in stories where superintelligent characters bring about their own downfall, those can just as easily be understood as warnings about hubris, a literary device employed as far back as the first recorded literature, “The Epic of Gilgamesh.” But The Hampdenshire Wonder, in its final pages, is making an altogether different claim: The pursuit of knowledge itself is ultimately self-defeating.

Nowadays we associate the word “prodigy” with precocious children, but in centuries past the word was used to describe anything monstrous. Victor Stott clearly qualifies as a prodigy in the modern sense, but he qualifies in the older sense too: Not only does he frighten the ignorant and superstitious, he induces a profound terror in the educated and intellectual. Seen in this light, the first novel about superintelligence is actually a work of horror SF, a cautionary tale about the dangers of knowing too much…

Superintelligence and its discontents, from @ted-chiang.bsky.social in @literaryhub.bsky.social.

Another powerful (and not unrelated) piece from Chiang: “Will A.I. Become the New McKinsey?”

* Kelly Thompson, The Girl Who Would Be King

###

As we wrestle with reason, we might wish a Joyeux Anniversaire to silk weaver Joseph Marie Jacquard; he was born on this date in 1752.  Jacquard’s 1805 invention of the programmable loom, controlled by a series of punched “instruction” cards and capable of weaving essentially any pattern, ignited a technological revolution in the textile industry… indeed, it set off a chain of revolutions: it inspired Charles Babbage in the design of his “Analytical Engine” (the ur-computer), and later, Herman Hollerith, who used punched cards in the “tabulator” that he created for the 1890 Census… and in so doing, pioneered the use of those cards for computer input… which is to say that Jacquard helped create the preconditions for AI (among all of the other things that computers can do).

Portrait of Joseph Marie Jacquard, a 19th-century inventor known for creating the programmable loom.

source

“The key to artificial intelligence has always been the representation”*…

AI is coming for search. OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which, asks the estimable Ted Chiang, do we prefer?

… Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse…
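Chiang’s photocopy-of-a-photocopy point is easy to check for yourself. Here’s a minimal sketch in Python (assuming the Pillow imaging library is installed; photo.jpg is a placeholder file name) that re-encodes an image repeatedly and tracks how far each generation drifts from the original:

```python
# Demonstrate JPEG "generation loss": re-encode an image over and
# over and measure the accumulated error against the original.
# Assumes Pillow (pip install Pillow); "photo.jpg" is a placeholder.
import io

from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")
current = original

for generation in range(1, 11):
    # Encode to an in-memory JPEG at a typical lossy quality setting...
    buffer = io.BytesIO()
    current.save(buffer, format="JPEG", quality=75)
    buffer.seek(0)
    # ...then decode it again, like making a copy of a copy.
    current = Image.open(buffer).convert("RGB")

    # Mean absolute per-pixel difference from the original: a rough
    # measure of how much information the artifacts have destroyed.
    diff = ImageChops.difference(original, current).convert("L")
    pixels = list(diff.getdata())
    print(f"generation {generation:2d}: "
          f"mean pixel error = {sum(pixels) / len(pixels):.2f}")
```

(In practice the error climbs over the first few generations and then levels off, since re-encoding an already-quantized image at the same setting discards less each round; the analogy to training a model on model-generated text is loose, but the direction of the degradation is the same.)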

Should we bank on AI in search? “ChatGPT Is a Blurry JPEG of the Web,” in @NewYorker.

For more of Chiang’s thoughts on AI, listen to (or read) his interview with Ezra Klein, in which he suggests that “most fears about A.I. are best understood as fears about capitalism.”

Also apposite: “AI, Minus the Hype” and “Imagining The QAnon Of The AI Era.”

* Jeff Hawkins (who seems to be agreeing with Baudrillard that “the sad thing about artificial intelligence is that it lacks artifice and therefore intelligence”)

###

As we fiddle with our filters, we might spare a thought for a man whose work has created a gargantuan training set for AI: Alphonse Bertillon; he died on this date in 1914. A police officer and biometrics researcher, he applied the anthropological technique of anthropometry to law enforcement, creating an identification system based on physical measurements. Anthropometry was the first scientific system used by police to identify criminals; before that time, criminals could only be identified by name or photograph. While the method was eventually eclipsed by fingerprinting, then DNA analysis, it is still in use.

Bertillon is also the inventor of the mug shot. The photographing of criminals had begun in the 1840s, only a few years after the invention of photography, but it was in 1888 that Bertillon standardized the process.

Bertillon’s work has been hugely impactful– and lies at the root of many AI systems being developed to finger criminals (especially via facial recognition). It’s worth remembering that his (flawed) evidence was used to wrongly convict Alfred Dreyfus in the infamous Dreyfus affair.

Bertillon’s mug shot self portrait (source)

“The purpose of a writer is to keep civilization from destroying itself”*…

Chiang

Traditional “good vs. evil” stories follow a certain pattern: the world starts out as a good place, evil intrudes, good defeats evil, and the world goes back to being a good place. These stories are all about restoring the status quo, so they are implicitly conservative. Real science fiction stories follow a different pattern: the world starts out as a familiar place, a new discovery or invention disrupts everything, and the world is forever changed. These stories show the status quo being overturned, so they are implicitly progressive. (This observation is not original to me; it’s something that scholars of science fiction have long noted.) This was in the context of a discussion about the role of dystopias in science fiction. I said that while some dystopian stories suggest that doom is unavoidable, other ones are intended as cautionary tales, which implies we can do something to avoid the undesirable outcome…

A lot of dystopian stories posit variations on a Mad Max world where marauders roam the wasteland. That’s a kind of change no one wants to see. I think those qualify as doom. What I mean by disruption is not the end of civilization, but the end of a particular way of life. Aristocrats might have thought the world was ending when feudalism was abolished during the French Revolution, but the world didn’t end; the world changed. (The critic John Clute has said that the French Revolution was one of the things that gave rise to science fiction.)…

The familiar is always comfortable, but we need to make a distinction between what is actually desirable and what is simply what we’re accustomed to; sometimes those are the same, and sometimes they are not. The people who are the happiest with the status quo are the ones who benefit most from it, which is why the wealthy are usually conservative; the existing order works to their advantage. For example, right now there’s a discussion taking place about canceling student debt, and a related discussion about why there is such a difference in the type of financial relief available to individuals as opposed to giant corporations. The people who will be happiest to return to our existing system of debt are the ones who benefit from it, and making them uncomfortable might be a good idea…

How we may never go “back to normal”– and why that might be a good thing: Halimah Marcus (@HalimahMarcus) interviews the estimable Ted Chiang.  Read it in full: “Ted Chiang Explains the Disaster Novel We All Suddenly Live In.”

* Albert Camus

###

As we put it all into perspective, we might recall that it was on this date in 1977 that Star Wars was released.  An epic space opera written and directed by George Lucas, it was both a box-office and critical success.  The highest-grossing film ever at the time (until the release of E.T. the Extra-Terrestrial in 1982), it is, when adjusted for inflation, the second-highest-grossing film in North America (behind Gone with the Wind).

The film won 6 Oscars for a variety of technical achievements.  As film critic Roger Ebert wrote in his book The Great Movies, “Like The Birth of a Nation and Citizen Kane, Star Wars was a technical watershed that influenced many of the movies that came after.”  It ushered in a new generation of special effects and high-energy motion pictures, and it was one of the first films to link genres together to invent a new, high-concept genre for filmmakers to build upon.  And, with Steven Spielberg’s Jaws, it shifted the film industry’s focus away from the personal filmmaking of the 1970s and toward fast-paced, big-budget blockbusters for younger audiences.

The film has been reissued many times and launched an industry of tie-in products, including novels, comics, video games, amusement park attractions, and merchandise including toys, games, and clothing. The film’s success led to two critically and commercially successful sequels, The Empire Strikes Back and Return of the Jedi, and later to a prequel trilogy, a sequel trilogy, two anthology films and various spin-off TV series.

Star Wars (1977) theatrical poster (source)

Written by (Roughly) Daily

May 25, 2020 at 1:01 am