(Roughly) Daily


“That’s the artist’s job, really: continually setting yourself free, and giving yourself new options and new ways of thinking about things”*…

A vibrant collage featuring a silhouette of a face overlaid with various images, including crowds, nature, and Hollywood themes, set against a pink background.

Further, in a fashion, to last week’s post on literacy (and post-literacy), Nathan Gardels alerts us to a conversation between Ken Liu and Nils Gilman, in which Liu suggests that, in a way analogous to the camera’s ability to capture motion (and thus transform storytelling), AI is emerging as a new artistic medium for capturing subjective experience…

For the celebrated novelist Ken Liu, whose works include “The Paper Menagerie” and the Chinese-to-English translation of “The Three-Body Problem,” science fiction is a way to plumb the anxieties, hopes and abiding myths of the collective unconscious.

In this pursuit, he argues in a Futurology podcast, AI should not be regarded as a threat to the distinctive human capacity to organize our reality or imagine alternative worlds through storytelling. On the contrary, the technology should be seen as an entirely new way to access that elusive realm beneath the surface and deepen our self-knowledge.

As a window into the interiority of others, and indeed, of ourselves, Liu believes the communal mirror of Large Language Models opens the horizons of how we experience and situate our presence in the world.

“It’s fascinating to me to think about AI as a potential new artistic medium in the same way that the camera was a new artistic medium,” he muses. What the roving aperture enabled was the cinematic art form of capturing motion, “so you can splice movement around … and can break all kinds of rules about narrative art that used to be true.

“In the dramatic arts, it was just assumed that because you had to perform in front of an audience on the stage, that you had to follow certain unities to make your story comprehensible. The unity of action, of place, of time. You can’t just randomly jump around, or the audience wouldn’t be able to follow you.

“But with this motion-capturing machine, you can in fact do that. That’s why an actual movie is very different from a play.

“You can do the reaction shots, you can do the montages, you can do the cuts, you can do the swipes, you can do all sorts of things in the language of cinema.

“You can put audiences in perspectives that they normally can never be in. So it’s such a transformation of the understanding of presence, of how a subject can be present in a dramatic narrative story.”

He continues: “Rather than thinking about AI as a cheap way to replace filmmakers, to replace writers, to replace artists, think of [it] as a new kind of machine that captures something and plays back something. What is the thing that it captures and plays back? The content of thought, or subjectivity.”

The ancient Greeks called the content, or object, of a person’s thought “noema,” which is why Gardels’s publication, Noema, bears that name.

Liu thus invents the term “Noematograph” as analogous to “the cinematograph not for motion, but for thought … AI is really a subjectivity capturing machine, because by being trained on the products of human thinking, it has captured the subjectivities, the consciousnesses, that were involved in the creation of those things.”

Liu sees value in what some regard as the worst qualities of generative AI.

“This is a machine that allows people to play with subjectivities and to craft their own fictions, to engage in their own narrative self-construction in the process of working with an AI,” he observes. “The fact that AI is sycophantic and shapeable by you is the point. It’s not another human being. It’s a simulation. It’s a construction. It’s a fictional thing.

“You can ask the AI to explain, to interpret. You can role-play with AI. You can explore a world that you construct together.

“You can also share these things with other humans. One of the great, fun trends on the internet involving using AI, in fact, is about people crafting their own versions of prompts with models and then sharing the results with other humans.

“And then a large group, a large community, comes together to collaboratively play with AI. So I think it’s the playfulness, it’s that interactivity, that I think is going to be really, really determinative of the future of AI as an art form.”

So, what will the product of this new art form look like?

“As a medium for art, what will come out of it won’t look anything like movies or novels … They’re going to be much more like conversations with friends. They’re going to be more like a meal you share with people. They are much more ephemeral in the moment. They’re about the participation. They’re about the consumer being also the creator.

“They’re much more personalized. They’re about you looking into the strange mirror and sort of examining your own subjectivity.”

Much of what Liu posits echoes views that the philosopher of technology Tobias Rees expressed in a previous conversation with Noema.

As Rees describes it, “AI has much more information available than we do, and it can access and work through this information faster than we can. It also can discover logical structures in data — patterns — where we see nothing.

“AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access.”

He goes on: “Imagine an AI model … that has access to all your data. Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.

“Such an AI system can make me visible to myself … it literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns, and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.”

Philosophically put, says Rees, invoking the meaning of “noema” as Liu does, “AI can help me transform myself into an ‘object of thought’ to which I can relate and on which I can work.

“The work of the self on the self has formed the core of what Greek philosophers called meletē and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher’s dream. It could make us humans visible to ourselves from outside of us.”

Liu’s insight as a writer of science fiction realism is to see what Rees describes in the social context of interactive connectivity.

The arrival of new technologies is always disruptive to familiar ways of seeing that were cultivated from within established capacities. Letting go of those comforting narratives that guide our inner world is existentially disorienting. It is here that art’s vocation comes into play as the medium that helps move the human condition along. To see technology as an art form, as Liu does, is to capture the epochal moment of transformation that we are presently living through…

Is AI birthing a new art form? “From Cinema To The Noematograph,” @kyliu99.bsky.social and @nilsgilman.bsky.social in @futurologypod.bsky.social.

See/hear the full conversation:

See also: “O brave new world, that has such people in ’t!”

* Miranda July

###

As we observe, with William Gibson, that the street finds its own uses for things, we might recall that it was on this date in 1959 that perhaps the pinnacle of cinema’s ability to capture motion was released: the most famous of the six films of Ben-Hur, “the Charlton Heston version.”

At the time, Ben-Hur had the largest budget of any film ($15.175 million) and the largest sets; its production employed a wardrobe staff of 100, over 200 artists, some 200 camels, 2,500 horses, and about 10,000 extras.

Filming began on May 18, 1958, and didn’t wrap until January 7, 1959. The film crew worked 12 to 14 hours a day, six days a week.

The chariot race scene lasts nine minutes in the finished film, and Miklós Rózsa’s score is the longest ever composed for a film.

– source

Written by (Roughly) Daily

November 18, 2025 at 1:00 am

“The limits of my language mean the limits of my world”*…

It seems clear that we are on the verge of an impactful new wave of technology. Venkatesh Rao suggests that it may be far more transformative than most of us imagine…

In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.

But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.

Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.

And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…

What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?

What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English”? If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…

There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.

Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s “E Unibus Pluram” is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create them. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.

Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…

Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.

(Image above: source)

* Ludwig Wittgenstein, Tractatus Logico-Philosophicus

###

As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, Russell powerfully influenced mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.

Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein— a founder of analytic philosophy, one principal focus of which was the philosophy of language.

source

“I will buckle down to work as soon as I finish reading the Internet”*…

From Aldobrandino da Siena’s Le Régime du corps (1265-70 CE)

Worried that technology is “breaking your brain”? As Joe Stadolnik explains, fears about attention spans and focus are as old as writing itself…

If you suspect that 21st-century technology has broken your brain, it will be reassuring to know that attention spans have never been what they used to be. Even the ancient Roman philosopher Seneca the Younger was worried about new technologies degrading his ability to focus. Sometime during the 1st century CE, he complained that ‘The multitude of books is a distraction’. This concern reappeared again and again over the next millennia. By the 12th century, the Chinese philosopher Zhu Xi saw himself living in a new age of distraction thanks to the technology of print: ‘The reason people today read sloppily is that there are a great many printed texts.’ And in 14th-century Italy, the scholar and poet Petrarch made even stronger claims about the effects of accumulating books:

Believe me, this is not nourishing the mind with literature, but killing and burying it with the weight of things or, perhaps, tormenting it until, frenzied by so many matters, this mind can no longer taste anything, but stares longingly at everything, like Tantalus thirsting in the midst of water.

Technological advances would make things only worse. A torrent of printed texts inspired the Renaissance scholar Erasmus to complain of feeling mobbed by ‘swarms of new books’, while the French theologian Jean Calvin wrote of readers wandering into a ‘confused forest’ of print. That easy and constant redirection from one book to another was feared to be fundamentally changing how the mind worked. Apparently, the modern mind – whether metaphorically undernourished, harassed or disoriented – has been in no position to do any serious thinking for a long time.

In the 21st century, digital technologies are inflaming the same old anxieties… and inspiring some new metaphors…

Same as it ever was– a survey of the anxieties about attention and memory that new communications technologies have occasioned through history: “We’ve always been distracted,” from @joestadolnik in @aeonmag.

* Stewart Brand @stewartbrand

###

As we learn our way into new media, we might recall that it was on this date in 1946 that the first Washington, D.C. – New York City telecast was accomplished, using AT&T’s coaxial cable; General Dwight Eisenhower was seen placing a wreath at the base of the statue in the Lincoln Memorial, and others made brief speeches. The event was judged a success by engineers, although Time magazine called it “as blurred as an early Chaplin movie.”

1946 television (source)

Written by (Roughly) Daily

February 18, 2023 at 1:00 am

“Ultimately, it is the desire, not the desired, that we love”*…

Or is it? The web– and the world– are awash in talk of the Mimetic Theory of Desire (or Rivalry, as its creator, René Girard, would also have it). Stanford professor (and Philosophy Talk co-host) Joshua Landy weighs in with a heavy word of caution…

Here are two readings of Shakespeare’s Hamlet. Which do you think we should be teaching in our schools and universities?

Reading 1. Hamlet is unhappy because he, like all of us, has no desires of his own, and therefore has no being, properly speaking. The best he can do is to find another person to emulate, since that’s the only way anyone ever develops the motivation to do anything. Shakespeare’s genius is to show us this life-changing truth.

Reading 2. Hamlet is unhappy because he, like all of us, is full of body thetans, harmful residue of the aliens brought to Earth by Xenu seventy-five million years ago and disintegrated using nuclear bombs inside volcanoes. Since it is still some time until the practice of auditing comes into being, Hamlet has no chance of becoming “clear”; it is no wonder that he displays such melancholy and aimlessness. Shakespeare’s genius is to show us this life-changing truth.

Whatever you make of the first, I’m rather hoping that you feel at least a bit uncomfortable with the second. If so, I have a follow-up question for you: what exactly is wrong with it? Why not rewrite the textbooks so as to make it our standard understanding of Shakespeare’s play? Surely you can’t fault the logic behind it: if humans have indeed been full of body thetans since they came into existence, and Hamlet is a representation of a human being, Hamlet must be full of body thetans. What is more, if everyone is still full of body thetans, then Shakespeare is doing his contemporaries a huge favor by telling them, and the new textbooks will be doing us a huge favor by telling the world. Your worry, presumably, is that this whole body thetan business is just not true. It’s an outlandish hypothesis, with nothing whatsoever to support it. And since, as Carl Sagan once said, “extraordinary claims require extraordinary evidence,” we would do better to leave it alone.

I think you see where I’m going with this. The fact is, of course, that the first reading is just as outlandish as the second. As I’m about to show (not that it should really need showing), human beings do have desires of their own. That doesn’t mean that all our desires are genuine; it’s always possible to be suckered into buying a new pair of boots, regardless of the fact that they are uglier and shoddier than our old ones, just because they are fashionable. What it means is that some of our desires are genuine. And having some genuine desires, and being able to act on them, is sufficient for the achievement of authenticity. For all we care, Hamlet’s inky cloak could be made by Calvin Klein, his feathered hat by Diane von Furstenberg; the point is that he also has motivations (to know things, to be autonomous, to expose guilt, to have his story told accurately) that come from within, and that those are the ones that count.

To my knowledge, no one in the academy actually reads Hamlet (or anything else) the second way. But plenty read works of literature the first way. René Girard, the founder of the approach, was rewarded for doing so with membership in the Académie française, France’s elite intellectual association. People loved his system so much that they established a Colloquium on Violence and Religion, hosted by the University of Innsbruck, complete with a journal under the ironically apt name Contagion. More recently, Peter Thiel, the co-founder of PayPal, loved it so much that he sank millions of dollars into Imitatio, an institute for the dissemination of Girardian thought. And to this day, you’ll find casual references to the idea everywhere, from people who seem to think it’s a truth, one established by René Girard. (Here’s a recent instance from the New York Times opinion pages: “as we have learned from René Girard, this is precisely how desires are born: I desire something by way of imitation, because someone else already has it.”) All of which leads to an inevitable question: what’s the difference between Girardianism and Scientology? Why has the former been more successful in the academy? Why is the madness of theory so, well, contagious?…

Are we really dependent on others for our desires? Does that mechanism inevitably lead to rivalry, scapegoating, and division? @profjoshlandy suggests not: “Deceit, Desire, and the Literature Professor: Why Girardians Exist,” in @StanfordArcade. Via @UriBram in @TheBrowser. Eminently worth reading in full.

* Friedrich Nietzsche (an inspiration to Girard)

###

As we tease apart theorizing, we might spare a thought for William Whewell; he died on this date in 1866. A scientist, Anglican priest, philosopher, theologian, and historian of science, he was Master of Trinity College, Cambridge.

At a time when specialization was increasing, Whewell was renowned for the breadth of his work: he published in the disciplines of mechanics, physics, geology, astronomy, and economics, while also finding the time to compose poetry, author a Bridgewater Treatise, translate the works of Goethe, and write sermons and theological tracts. In mathematics, Whewell introduced what is now called the Whewell equation, defining the shape of a curve without reference to an arbitrarily chosen coordinate system. He founded mathematical crystallography and developed a revision of Friedrich Mohs’s classification of minerals. And he organized thousands of volunteers internationally to study ocean tides, in what is now considered one of the first citizen science projects.

But some argue that Whewell’s greatest gift to science was his wordsmithing: He created the words scientist and physicist by analogy with the word artist; they soon replaced the older term natural philosopher. He also named linguistics, consilience, catastrophism, uniformitarianism, and astigmatism.

Other useful words were coined to help his friends: biometry for John Lubbock; Eocene, Miocene, and Pliocene for Charles Lyell; and for Michael Faraday, electrode, anode, cathode, diamagnetic, paramagnetic, and ion (whence the sundry other particle names ending in -ion).

source