“[They] would think that the truth is nothing but the shadows cast by the artifacts.”*…
How do AI models “understand” and represent reality? Is the inside of a vision model at all like that of a language model? As Ben Brubaker reports, researchers argue that, as the models grow more powerful, they may be converging toward a singular “Platonic” way to represent the world…
Read a story about dogs, and you may remember it the next time you see one bounding through a park. That’s only possible because you have a unified concept of “dog” that isn’t tied to words or images alone. Bulldog or border collie, barking or getting its belly rubbed, a dog can be many things while still remaining a dog.
Artificial intelligence systems aren’t always so lucky. These systems learn by ingesting vast troves of data in a process called training. Often, that data is all of the same type — text for language models, images for computer vision systems, and more exotic kinds of data for systems designed to predict the odor of molecules or the structure of proteins. So to what extent do language models and vision models have a shared understanding of dogs?
Researchers investigate such questions by peering inside AI systems and studying how they represent scenes and sentences. A growing body of research has found that different AI models can develop similar representations, even if they’re trained using different datasets or entirely different data types. What’s more, a few studies have suggested that those representations are growing more similar as models grow more capable. In a 2024 paper, four AI researchers at the Massachusetts Institute of Technology argued that these hints of convergence are no fluke. Their idea, dubbed the Platonic representation hypothesis, has inspired a lively debate among researchers and a slew of follow-up work.
The team’s hypothesis gets its name from a 2,400-year-old allegory by the Greek philosopher Plato. In it, prisoners trapped inside a cave perceive the world only through shadows cast by outside objects. Plato maintained that we’re all like those unfortunate prisoners. The objects we encounter in everyday life, in his view, are pale shadows of ideal “forms” that reside in some transcendent realm beyond the reach of the senses.
The Platonic representation hypothesis is less abstract. In this version of the metaphor, what’s outside the cave is the real world, and it casts machine-readable shadows in the form of streams of data. AI models are the prisoners. The MIT team’s claim is that very different models, exposed only to the data streams, are beginning to converge on a shared “Platonic representation” of the world behind the data.
“Why do the language model and the vision model align? Because they’re both shadows of the same world,” said Phillip Isola, the senior author of the paper.
Not everyone is convinced. One of the main points of contention involves which representations to focus on. You can’t inspect a language model’s internal representation of every conceivable sentence, or a vision model’s representation of every image. So how do you decide which ones are, well, representative? Where do you look for the representations, and how do you compare them across very different models? It’s unlikely that researchers will reach a consensus on the Platonic representation hypothesis anytime soon, but that doesn’t bother Isola.
“Half the community says this is obvious, and the other half says this is obviously wrong,” he said. “We were happy with that response.”…
Read on: “Distinct AI Models Seem To Converge On How They Encode Reality,” from @quantamagazine.bsky.social.
Bracket with: “AGI is here (and I feel fine),” from Robin Sloan and “We Need to Talk About How We Talk About ‘AI’,” from Emily Bender and Nanna Inie.
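(For the curious: the question of how one actually compares representations across very different models has some standard answers in the literature. One widely used metric is linear centered kernel alignment (CKA), which scores how similarly two models arrange the same set of inputs in their respective embedding spaces. The sketch below is illustrative only — it is not necessarily the alignment metric used in the MIT paper, and the toy "model" embeddings are random stand-ins:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representations
    of the same n inputs (rows = inputs, columns = features).
    Returns a similarity score in [0, 1]."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, "fro")
                   * np.linalg.norm(Y.T @ Y, "fro"))
    return numerator / denominator

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 64))      # stand-in for model 1's embeddings
B = A @ rng.standard_normal((64, 32))   # a linear function of A: "aligned" model
C = rng.standard_normal((100, 32))      # unrelated embeddings

print(linear_cka(A, A))  # identical representations score 1.0
print(linear_cka(A, B))  # related representations score higher...
print(linear_cka(A, C))  # ...than unrelated ones
```

Because CKA only needs each model's embeddings of a shared set of inputs, it can compare, say, a language model's representations of captions with a vision model's representations of the corresponding images — which is the kind of cross-modal comparison at issue in the debate above.)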
* Socrates, in the “Allegory of the Cave,” from Plato’s Republic (Book VII)
###
As we interrogate ideas and Ideas, we might recall that it was on this date that the fictional HAL 9000 computer became operational, according to Arthur C. Clarke’s 2001: A Space Odyssey, in which the artificially intelligent computer states: “I am a HAL 9000 computer, Production Number 3. I became operational at the HAL Plant in Urbana, Illinois, on January 12, 1997.” (Kubrick’s 1968 movie adaptation put his birthdate in 1992.)
“Human intelligence is among the most fragile things in nature. It doesn’t take much to distract it, suppress it, or even annihilate it.”*…
As Sarah O’Connor observes, technology has changed the way many of us consume information, from complex pieces of writing to short video clips…
The year was 1988, a former Hollywood actor was in the White House, and the media critic Neil Postman was worried about the ascendancy of pictures over words in American media, culture and politics. Television “conditions our minds to apprehend the world through fragmented pictures and forces other media to orient themselves in that direction,” he argued in an essay in his book Conscientious Objections. “A culture does not have to force scholars to flee to render them impotent. A culture does not have to burn books to assure that they will not be read . . . There are other ways to achieve stupidity.”
What might have seemed curmudgeonly in 1988 reads more like prophecy from the perspective of 2024. This month, the OECD released the results of a vast exercise: in-person assessments of the literacy, numeracy and problem-solving skills of 160,000 adults aged 16-65 in 31 different countries and economies. Compared with the last set of assessments a decade earlier, the trends in literacy skills were striking. Proficiency improved significantly in only two countries (Finland and Denmark), remained stable in 14, and declined significantly in 11, with the biggest deterioration in Korea, Lithuania, New Zealand and Poland.
Among adults with tertiary-level education (such as university graduates), literacy proficiency fell in 13 countries and only increased in Finland, while nearly all countries and economies experienced declines in literacy proficiency among adults with below upper secondary education. Singapore and the US had the biggest inequalities in both literacy and numeracy.
“Thirty per cent of Americans read at a level that you would expect from a 10-year-old child,” Andreas Schleicher, director for education and skills at the OECD, told me — referring to the proportion of people in the US who scored level 1 or below in literacy. “It is actually hard to imagine — that every third person you meet on the street has difficulties reading even simple things.”
In some countries, the deterioration is partly explained by an ageing population and rising levels of immigration, but Schleicher says these factors alone do not fully account for the trend. His own hypothesis would come as no surprise to Postman: that technology has changed the way many of us consume information, away from longer, more complex pieces of writing, such as books and newspaper articles, to short social media posts and video clips.
At the same time, social media has made it more likely that you “read stuff that confirms your views, rather than engages with diverse perspectives, and that’s what you need to get to [the top levels] on the [OECD literacy] assessment, where you need to distinguish fact from opinion, navigate ambiguity, manage complexity,” Schleicher explained.
The implications for politics and the quality of public debate are already evident. These, too, were foreseen. In 2007, Caleb Crain wrote an article in The New Yorker called “Twilight of the Books,” about what a post-literate culture might look like. In oral cultures, he wrote, cliché and stereotype are valued, conflict and name-calling are prized because they are memorable, and speakers tend not to correct themselves because “it is only in a literate culture that the past’s inconsistencies have to be accounted for”. Does that sound familiar?…
One recalls Plato’s report that Socrates lamented the introduction of writing (on the grounds that it would erode the centrality of memory and memorization and the tradition of oral disputation). And one reckons that in retrospect, even as one acknowledges that Socrates wasn’t wrong, one is not sorry that writing came to play the foundational role that it has in scholarship, culture, and commerce.
So perhaps we’re just in the first steps of a transition on the other side of which a new kind of literacy has displaced the current one (and advanced our state of being in the same way that writing has). Perhaps. Even then, in the moment it’s anxiety-provoking: even if we are bound for a new (higher-order?) literacy, it’s the curse of the earlier phases of a tectonic cultural shift that what we’re losing is much clearer than what we may gain.
“Are we becoming a post-literate society?” (gift article) by @sarahoconnorft.bsky.social in @financialtimes.com.
(The full OECD report, which includes a larger version of the chart above, is here.)
See also: “Stop speedrunning to a dystopia,” from Erik Hoel.
* Neil Postman, Amusing Ourselves to Death
###
As we fumble toward the future, we might recall that it was on this date in 1992 that HAL 9000, the AI character (and main antagonist) in Arthur C. Clarke’s (and Stanley Kubrick’s) Space Odyssey series, became operational.
More specifically: in the film, HAL became operational on 12 January 1992, at the HAL Laboratories in Urbana, Illinois, as production number 3. The activation year was 1991 in earlier screenplays and was changed to 1997 in Clarke’s novel, written and released in conjunction with the movie.




