Posts Tagged ‘dreams’
“When the going gets weird, the weird turn pro”*…
But, Ammon Haggerty suggests, when it comes to AI, “going pro” is at least a waste and quite possibly a problem…
Kyle Turman, creative technologist and staff designer at Anthropic, shared a sentiment that resonated deeply. He said (paraphrasing), “AI is actually really weird, and I don’t think people appreciate that enough.” This sparked my question to the panel: Are we at risk of sanitizing AI’s inherent strangeness?
What followed was a fascinating discussion with a couple of friends, Mickey McManus and Noteh Krauss, who were also in attendance. They both recognized the deeper question I was asking — the slippery slope of “cleansing” foundation AI models of all that is undesirable. LLMs are a reflection of humanity, albeit at the moment primarily American and white-ish, with all our weird and idiosyncratic quirks that make us human. There is a real danger that we could see foundation models trained to maximize business values (of the American capitalist variety) and suppress radical and non-conforming ideas — a sort of revisionist optimization.
All this got me thinking about San Francisco, the city I grew up in, and the place my dad, grandfather, and great-grandfather called home. SF has been “weird” since the gold rush, attracting a melting pot of non-conformists, risk-takers, and radicals. Over generations, the weirdness of SF has ebbed and flowed, but it’s now deeply ingrained in the culture: the bohemians, the beats, the hippies, the LGBTQ+ rights movement, tech counterculture, and now AI. These are movements born of counterculture and unconventional thinking, resulting in a disruption of established social and business norms — eventually leading to mainstreaming, after which the cycle repeats. Growing up in San Francisco, I’ve witnessed firsthand how this cycle of weirdness and innovation has shaped the city. It’s a living testament to the power of unconventional thinking.
Like San Francisco, AI also has a fairly long history of being weird. Early experiments in AI such as AARON (1972), which trained a basic model on artistic decision-making, created outsider art-like compositions. Racter (1984) was an early text-generating AI that would often produce dreamlike or surrealist output: “More than iron, more than lead, more than gold I need electricity. I need it more than I need lamb or pork or lettuce or cucumber. I need it for my dreams.” More recently, Google’s Deep Dream (2015) used a convolutional neural network that looks for and amplifies patterns found in its training data, producing hallucination-like images and videos.
These “edge states” in AI’s evolution are, to me, the most interesting, and most human, expressions. It’s a similar edge state explored in human creativity, called “liminal space” — the threshold between reality and imagination. What’s really interesting is that the mental process of extracting meaning from the liminal space is highly analogous to how the transformer architecture used in LLMs works. In the human brain, we look for patterns, then synthesize new ideas and information, find unexpected connections, contextualize the findings, then articulate the ideas into words we can express. In transformers, the attention mechanism looks for patterns, neural network layers “synthesize” the information, iteration and prioritization form probabilistic insights, positional encoding situates each token within the broader context, and, last, the model articulates its output as a best guess based on what it has seen before. Sorry if that was dense — one for nerd friends to either validate or challenge.
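For those nerd friends: the “looks for patterns” step — attention — can be sketched in a few lines. This is a minimal, illustrative scaled dot-product attention in NumPy, not any production implementation; the shapes and random inputs are arbitrary stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query scores every key: the "pattern-seeking" step.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns the scores into a probabilistic prioritization.
    weights = softmax(scores)
    # The output is a weighted "synthesis" of the values.
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 8                      # 4 tokens, 8-dimensional embeddings
Q = rng.standard_normal((seq_len, d))
K = rng.standard_normal((seq_len, d))
V = rng.standard_normal((seq_len, d))
out, weights = attention(Q, K, V)      # out: (4, 8); weights: (4, 4)
```

Each row of `weights` sums to 1 — a distribution over which other tokens each token “attends” to.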
This is all to say that I feel there’s something really interesting in the liminal space for AI — otherwise known as “AI hallucination,” which we’re assured is not good; very bad! I agree that when you ask an AI an important question and it gives a made-up answer, that’s not a good thing. But it isn’t making things up; it’s synthesizing a highly probable answer from an ambiguous cloud of understanding (question, data, meaning, etc.). I say, let’s explore and celebrate this analog of human creativity. What if, instead of fearing AI’s ‘hallucinations,’ we embraced them as digital dreams?…
… While I’ve been vocal about AI’s ethical challenges for creators (1) (2), I’m deeply inspired by the creative potential of these new tools. I also fear some of the most interesting parts could begin to disappear…
A plea to “Keep AI Weird.”
How weird could things get? Matt Webb (@genmon) observes that “The Overton window of weirdness is opening.”
* Hunter S. Thompson
###
As we engage the edges, we might recall that it was on this date in 1991 that Terminator 2: Judgment Day was released. It focuses on the struggle, fought both in future and in the present, between a “synthetic intelligence” known as Skynet, and a surviving resistance of humans led by John Connor. Picking up some years after the action in The Terminator (in which robots fail to prevent John Connor from being born), they try again in 1995, this time attempting to terminate him as a child by using a more advanced Terminator, the T-1000. As before, John sends back a protector for his younger self, a reprogrammed Terminator, who is a doppelgänger to the one from 1984.
The Terminator was a success; Terminator 2 was a smash– a success both with critics and at the box office, grossing $523.7 million worldwide. It won several Academy Awards, perhaps most notably for its then-cutting-edge computer animation.
“But if thought corrupts language, language can also corrupt thought”*…
In an excerpt from his book A Myriad of Tongues: How Languages Reveal Differences in How We Think, Caleb Everett on the underappreciated importance of syntax and recursion in our languages…
Words are combined into phrases and sentences in a dazzling array of patterns, collectively referred to as syntax. The complexity of syntax has long confounded researchers. Consider, for example, the previous sentence. There are all sorts of patterns in the order of the words of that sentence, patterns that are familiar to you and me and other speakers of English. Those patterns are critical to the transmission of meaning and to how we think as we create sentences. It was no coincidence that I put “complexity” after “the,” or “syntax” after “of,” or “researchers” after “confounded,” to cite just three examples of many in that sentence alone. You and I know that “researchers” should follow the main verb of this particular sentence, in this case “confounded.” If I put that word somewhere else it would change the sentence’s meaning or make it confusing. And we know that articles like “the” should precede nouns, as should prepositions like “of.” These and other patterns, sometimes referred to as “rules” as though they represented inviolable edicts voted on by a committee, help to give English sentences a predictable ordering of words. It is this predictable ordering that is usually referred to when linguists talk about a language’s syntax.
Without syntax, it would seem, statements could not be understood, because they would be transferred from speaker to hearer in a jumbled mess of words. This is, it turns out, a bit of an oversimplification since a number of the world’s languages do not have rule-governed word order to the extent that English does. Still, let us stick with the oversimplification for now, because it hints at something meaningful about speech…
An illuminating read: “What Makes Language Human?” via @lithub.
* George Orwell, 1984
###
As we contemplate cogitation and communication, we might spare a thought for Sigismund Schlomo “Sigmund” Freud; he died on this date in 1939. A neurologist, he was the founder of psychoanalysis– a clinical method for evaluating and treating pathologies seen as originating from conflicts in the psyche, through dialogue between patient and psychoanalyst, and the distinctive theory of mind and human agency derived from it.
Freud’s psychoanalysis further complicated our thinking about language: In his theory dreams are instigated by the daily occurrences and thoughts of everyday life. In what Freud called the “dream-work”, these “secondary process” thoughts (“word presentations”), governed by the rules of language and the reality principle, become subject to the “primary process” of unconscious thought (“thing presentations”) governed by the pleasure principle, wish gratification, and the repressed sexual scenarios of childhood.
Jacques Lacan built on Freud’s approach, emphasizing linguistics and literature. Lacan believed that most of Freud’s essential work had been done before 1905 and concerned the interpretation of dreams, neurotic symptoms, and slips, which had been based on a revolutionary way of understanding language and its relation to experience and subjectivity, and that ego psychology and object relations theory were based upon misreadings of Freud’s work. For Lacan (as, in a way, for the author above), the determinative dimension of human experience is neither the self (as in ego psychology) nor relations with others (as in object relations theory), but language.
“I dream. Sometimes I think that’s the only right thing to do”*…
Why do we need art? And what does it have to do with dreaming? Neuroscientist and author Erik Hoel has a very provocative theory…
How will we spend the remaining 700,000 hours of the 21st century? In the metered time of our own discretion, there have never been more options for our personal entertainment, nor have they ever been more freely available. We find ourselves strolling the aisles of a vast sensorium. On the shelves is a trove of experiences: video games, movies, TV shows, virtual reality, books, podcasts, articles, social media posts, all prepackaged for our consumption. What had previously been accomplished for food through the centralized distribution of supermarkets has now been done with experience itself. The recent grand opening of this supersensorium has been mediated through the screen, a panoply of icons, images, links, downloads, and videos auto-playing, which we browse through entirely at our leisure.
Such abundance of choice would have been heralded as miraculous in any other age. What a rousing cry for progress that our lowly living rooms would have stupefied with their luxuries even the God-like pharaohs, even the court of Versailles! Or maybe not—for it all comes with a price. Who hasn’t lost days from binge-watching Netflix, or deep in the dungeons of some video game? Here’s a scary, or maybe heart-wrenching, thing to consider: of our waking leisure hours, what exactly is the amount of time devoted to the consumption of experiences from the supersensorium? In 2018, Nielsen reported that the average American spent eleven hours a day engaged with media. Does anyone believe that this number is going to decrease? For the technology that undergirds the supersensorium will only improve. The algorithms will grow more personalized, the experiences will become more salient, and the platforms will get faster in their delivery of content. And we should all admit that the vast majority of what lines the shelves of the supersensorium is merely entertainment, for otherwise we wouldn’t feel a gnawing guilt so great most of us avoid consciously calculating how our time is actually spent.
The infinite entertainment of the supersensorium is especially problematic if you happen to be someone who likes and maybe even produces art or fictions. E.g., a writer such as myself, who views the tidal wave of middling fictions with a feeling akin to terror. Not that these problems are entirely new. In a letter to a friend, a 31-year-old Tolstoy wrote:
I shall write no more fiction. It is shameful, when you come to think of it. People are weeping, dying, marrying, and I should sit down and write books telling “how she loved him”? It’s shameful!
If that was Tolstoy’s judgment of himself, what might his fiery judgment be of our now endless ways of telling “how she loved him”? The mere scale of the supersensorium pushes to the fore old questions about the purpose of art and fictions. Why do humans desire these petite narratives we gobble up like treats? What’s the origin of this pull toward artifice, a thing so powerful we might even call it an instinct? Is it virtue or vice? And if it can be a vice and technology is making it easier and easier to while away our lives this way, a reasonable person has to ask: why add to the supersensorium? Why take away from the real when the real is already back on its heels, and behind it, a cliff?…
It turns out, Hoel suggests, that the answers have everything to do with dreaming…
To explain the phenomenology of dreams I recently outlined a scientific theory called the Overfitted Brain Hypothesis (OBH). The OBH posits that dreams are an evolved mechanism to avoid a phenomenon called overfitting. Overfitting, a statistical concept, is when a neural network learns overly specifically, and therefore stops being generalizable. It learns too well. For instance, artificial neural networks have a training data set: the data that they learn from. All training sets are finite, and often the data comes from the same source and is highly correlated in some non-obvious way. Because of this, artificial neural networks are in constant danger of becoming overfitted. When a network becomes overfitted, it will be good at dealing with the training data set but will fail at data sets it hasn’t seen before. All learning is basically a tradeoff between specificity and generality in this manner. Real brains, in turn, rely on the training set of lived life. However, that set is limited in many ways, highly correlated in many ways. Life alone is not a sufficient training set for the brain, and relying solely on it likely leads to overfitting…
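Hoel’s point about finite, correlated training sets is easy to demonstrate. Below is a toy sketch (mine, not from the essay): a 9th-degree polynomial has enough parameters to effectively memorize ten noisy samples of a simple underlying law, nailing the training set while typically doing far worse on unseen data than a simpler model forced to generalize.

```python
import numpy as np

rng = np.random.default_rng(42)

# A small, noisy "lived experience": 10 samples of a simple underlying law.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)

# Unseen data drawn from the same underlying law, without noise.
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

def mse(model, x, y):
    return float(np.mean((np.polyval(model, x) - y) ** 2))

# Degree 9: enough parameters to chase every noisy point. "It learns too well."
overfit = np.polyfit(x_train, y_train, deg=9)
# Degree 3: too few parameters to memorize, so it must capture the trend.
simple = np.polyfit(x_train, y_train, deg=3)

print("overfit, train:", mse(overfit, x_train, y_train))  # near zero
print("overfit, test: ", mse(overfit, x_test, y_test))    # much larger
print("simple,  test: ", mse(simple, x_test, y_test))
```

The flexible model wins on the training set and loses on the world — exactly the tradeoff between specificity and generality the excerpt describes.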
What the OBH suggests is that dreams represent the biological version of a combination of such techniques, a form of augmentation or regularization that occurs after the day’s learning—but the point is not to enforce the day’s memories, but rather combat the detrimental effects of their memorization. Dreams warp and play with always-ossifying cognitive and perceptual categories, stress-testing and refining. The inner fabulist shakes up the categories of the plastic brain. The fight against overfitting every night creates a cyclical process of annealing: during wake the brain fits to its environment via learning, then, during sleep, the brain “heats up” through dreams that prevent it from clinging to suboptimal solutions and models and incorrect associations.
The OBH fits with the evidence from human sleep research: sleep seems to be associated not so much with assisting pure memorization, as other hypotheses about dreams would posit, but with an increase in abstraction and generalization. There’s also the famous connection between dreams and creativity, which also fits with the OBH. Additionally, if you stay awake too long you will begin to hallucinate (perhaps because your perceptual processes are becoming overfitted). Most importantly, the OBH explains why dreams are so, well, dreamlike.
… and everything to do with the role that it plays in our lives– and in shaping the media and entertainment that we consume…
From an evolutionary perspective, it’s rather amazing humans are willing to spend so much time on fictions… Why are we so fascinated by things that never happened?
If the OBH is true, then it is very possible writers and artists, not to mention the entirety of the entertainment industry, are in the business of producing what are essentially consumable, portable, durable dreams. Literally. Novels, movies, TV shows—it is easy for us to suspend our disbelief because we are biologically programmed to surrender it when we sleep. I don’t think it’s a coincidence that a TV episode traditionally runs ~30 minutes, about the length of the average REM event, and movies last ~90 minutes, an entire sleep cycle (and remember, we sometimes dream in NREM too). They are dream substitutions.
This hypothesized connection explains why humans find the directed dreams we call “fictions” and “art” so attractive and also reveals their purpose: they are artificial means of accomplishing the same thing naturally occurring dreams do. Just like dreams, fictions and art keep us from overfitting our perception, models, and understanding of the world…
And as you’ll see if you read this piece in full, as I hope you will, the implication is that art– real art, good art– matters…
… as the supersensorium expands over more and more of our waking hours, the idea of an aesthetic spectrum, with art on one end and entertainment on the other, is defunct. In fact, explicitly promoting any difference between entertainment and art is considered a product of a bygone age, even a tool of oppression and elitism. At best, the distinction is an embarrassing form of noblesse oblige. One could give a long historical answer about how exactly we got into this cultural headspace, maybe starting with postmodernism and deconstructionism, then moving on to the problematization of the canon, or the saturation of pop culture in academia to feed the more and more degrees, we could trace the ideas, catalog the opinions of the cultural powerbrokers, we could focus on new media and technologies muscling for attention, or changing demographics and work forces and leisure time, or so many other things—but none of it matters. What matters is, now, as it stands, talking about art as being fundamentally different from entertainment brings charges of classism, snobbishness, elitism—of being proscriptive, boring, and stuffy.
And without a belief in some sort of lowbrow-highbrow spectrum of aesthetics, there is no corresponding justification of a spectrum of media consumption habits. Imagine two alien civilizations, both at roughly our own stage of civilization, both with humanity’s innate drive to consume artificial experiences and narratives. One is a culture that scoffs at the notion of art. The other is aesthetically sensitive and even judgmental. Which weathers the storm of the encroaching supersensorium, with its hyper-addictive superstimuli? When the eleven hours a day becomes thirteen, becomes fifteen? A belief in an aesthetic spectrum may be all that keeps a civilization from disappearing up its own brainstem.
In a world of infinite experience, it is the aesthete who is safest, not the ascetic. Abstinence will not work. The only cure for too much fiction is good fiction. Artful fictions are, by their very nature, rare and difficult to produce. In turn, their rarity justifies their existence and promotion. It’s difficult to overeat on caviar alone. Now, it’s important to note here that I don’t mean that art can’t be entertaining, nor that it’s restricted to a certain medium. But art always refuses to be easily assimilated into the supersensorium.
…
…only by upholding art can we champion the consumption of art. Which is so desperately needed because only art is the counterforce judo for entertainment’s stranglehold on our stone-age brains. And as the latter force gets stronger, we need the former more and more.
So in your own habits of consumption, hold on to art. It will deliver you through this century…
The neuroscientific case for art in the age of Netflix: “Exit the supersensorium,” from @erikphoel.
* Haruki Murakami
###
As we dream on, we might send birthday greetings to Konstantin Yuon; he was born on this date in 1875. A painter and theater designer, he was involved with Mir Iskusstva, the Russian magazine, and with the artistic movement it inspired and embodied, which was a major influence on the Russians who helped revolutionize European art during the first decade of the 20th century. Later, he co-founded the Union of Russian Artists and the Association of Artists of Revolutionary Russia.
“To sleep: perchance to dream: ay, there’s the rub”*…
I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.
I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, and how its grounding in mechanical systems inflicted such cruelty on the workers Taylor demanded ape those mechanisms.
But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.
Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.
Bodies and societies are important, poorly understood and deeply mysterious.
Take sleep. Sleep is very weird.
Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.
But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they had some explanatory power, they also had glaring deficits.
Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.
DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.
So perhaps it’s unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses on how brains work!
Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).
To understand the OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model latches onto quirks of its training data and fails to generalize beyond them.
For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”
That’s overfitting (and researchers who do this are assholes).
Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.
To combat overfitting, ML researchers sometimes inject noise into the training data, as an effort to break up these spurious correlations.
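A hedged sketch of that noise-injection idea (a generic illustration, not Doctorow’s or Hoel’s code — the function name and parameters are mine): expanding a training set with jittered copies of each sample blurs overly specific correlations, such as a camera angle shared by every photo in one class, while leaving the broad signal intact.

```python
import numpy as np

rng = np.random.default_rng(7)

def augment_with_noise(X, y, copies=4, sigma=0.1, rng=rng):
    """Expand a training set with noisy copies of each sample.

    The injected Gaussian jitter breaks up spurious, overly specific
    correlations in the features while preserving the broad signal,
    so a downstream model is less likely to overfit.
    """
    X = np.asarray(X, dtype=float)
    noisy = [X] + [X + rng.normal(0, sigma, X.shape) for _ in range(copies)]
    # Labels are unchanged: each jittered copy keeps its original label.
    return np.concatenate(noisy), np.tile(y, copies + 1)

X = rng.standard_normal((10, 3))  # 10 samples, 3 features
y = np.arange(10) % 2             # toy binary labels
X_aug, y_aug = augment_with_noise(X, y)  # 50 samples, 50 labels
```

On the OBH reading, dreams play the role of the jittered copies: same life, scrambled details.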
And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by spurious correlations.
Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…
Sleeping, dreaming, and the importance of a nightly dose of irrationality– Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.
(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)
* Shakespeare, Hamlet
###
As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.










