Posts Tagged ‘AI’
“For every complex problem there is an answer that is clear, simple, and wrong”*…
… Still, we try. Consider the elections on the horizon in the U.S., the mid-terms later this year and the general election in 2028: President Trump, who has mused that “we shouldn’t even have an election” in 2026, recently (again) threatened to invoke the Insurrection Act, which many believe could be a step toward suspension of the vote.
But even if the polls go ahead as planned, emerging AI technologies are entangling with our crisis in democracy. Rachel George and Ian Klaus (of the Carnegie Endowment for International Peace) weigh in on both the dangers and the potential upsides with a useful “map” of the issues. From their executive summary…
- AI poses substantial threats and opportunities for democracy in an important year ahead for global democracy. Despite the threats, AI technologies can also improve representative politics, citizen participation, and governance.
- AI influences democracy through multiple entry points, including elections, citizen deliberation, government services, and social cohesion, all of which are influenced by geopolitics and security. All of these domains, mapped in this paper, face threats related to influence, integrity, and bias, yet also present opportunities for targeted interventions.
- The current field of interventions at the intersection of AI and democracy is diverse, fragmented, and boutique. Not all AI interventions with the potential to influence democracy are framed as “democracy work” [e.g., mis-/dis-information and election administration], demonstrating the imperative for democracy advocates to widen the rhetorical aperture and to continue to map, identify, and scale interventions.
- Diverse actors who are relevant to the connections between AI and democracy require tailored expertise and guardrails to maximize benefits and reduce harms. We present four prominent constellations of actors who operate at the AI–democracy intersection: policy-led, technology-enabled; politics-led, technology-enabled; civil society–led, technology-enabled; and technology-led, policy-deployed. Though each brings advantages, policy-led and technology-led interventions tend to have access to resources and innovation capacity in ways that enable more immediate and sizable impacts…
The full report: “AI and Democracy: Mapping the Intersections,” from @carnegieendowment.org.
* H. L. Mencken
###
As we fumble with our franchise, we might recall that it was on this date in 1966 that The 13th Floor Elevators (led by the now-legendary Roky Erickson) released their first single, the now-classic “You’re Gonna Miss Me.”
“[They] would think that the truth is nothing but the shadows cast by the artifacts.”*…
How do AI models “understand” and represent reality? Is the inside of a vision model at all like a language model? As Ben Brubaker reports, researchers argue that as the models grow more powerful, they may be converging toward a singular “Platonic” way to represent the world…
Read a story about dogs, and you may remember it the next time you see one bounding through a park. That’s only possible because you have a unified concept of “dog” that isn’t tied to words or images alone. Bulldog or border collie, barking or getting its belly rubbed, a dog can be many things while still remaining a dog.
Artificial intelligence systems aren’t always so lucky. These systems learn by ingesting vast troves of data in a process called training. Often, that data is all of the same type — text for language models, images for computer vision systems, and more exotic kinds of data for systems designed to predict the odor of molecules or the structure of proteins. So to what extent do language models and vision models have a shared understanding of dogs?
Researchers investigate such questions by peering inside AI systems and studying how they represent scenes and sentences. A growing body of research has found that different AI models can develop similar representations, even if they’re trained using different datasets or entirely different data types. What’s more, a few studies have suggested that those representations are growing more similar as models grow more capable. In a 2024 paper, four AI researchers at the Massachusetts Institute of Technology argued that these hints of convergence are no fluke. Their idea, dubbed the Platonic representation hypothesis, has inspired a lively debate among researchers and a slew of follow-up work.
The team’s hypothesis gets its name from a 2,400-year-old allegory by the Greek philosopher Plato. In it, prisoners trapped inside a cave perceive the world only through shadows cast by outside objects. Plato maintained that we’re all like those unfortunate prisoners. The objects we encounter in everyday life, in his view, are pale shadows of ideal “forms” that reside in some transcendent realm beyond the reach of the senses.
The Platonic representation hypothesis is less abstract. In this version of the metaphor, what’s outside the cave is the real world, and it casts machine-readable shadows in the form of streams of data. AI models are the prisoners. The MIT team’s claim is that very different models, exposed only to the data streams, are beginning to converge on a shared “Platonic representation” of the world behind the data.
“Why do the language model and the vision model align? Because they’re both shadows of the same world,” said Phillip Isola, the senior author of the paper.
Not everyone is convinced. One of the main points of contention involves which representations to focus on. You can’t inspect a language model’s internal representation of every conceivable sentence, or a vision model’s representation of every image. So how do you decide which ones are, well, representative? Where do you look for the representations, and how do you compare them across very different models? It’s unlikely that researchers will reach a consensus on the Platonic representation hypothesis anytime soon, but that doesn’t bother Isola.
“Half the community says this is obvious, and the other half says this is obviously wrong,” he said. “We were happy with that response.”…
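(A technical aside: the comparison problem the skeptics raise is usually attacked by embedding the same inputs with both models and scoring how similarly the two sets of embeddings are arranged. Below is a minimal sketch of one standard metric from this literature, linear centered kernel alignment (CKA); the MIT paper itself favors a related mutual nearest-neighbor measure. The matrices here are random stand-ins for real model embeddings, and `linear_cka` is our own illustrative helper, not anyone’s published code.)

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two embedding matrices.

    X: (n, d1) embeddings of the same n inputs from model A
    Y: (n, d2) embeddings of those same inputs from model B
    Returns a score in [0, 1]; higher means the two models arrange
    the inputs more similarly, whatever their native dimensions.
    """
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Stand-ins: two "models" that are different linear views of one latent
# world, plus a third, unrelated embedding for comparison.
rng = np.random.default_rng(0)
world = rng.normal(size=(1000, 16))          # the shared reality
lang = world @ rng.normal(size=(16, 64))     # "language model" view
vision = world @ rng.normal(size=(16, 64))   # "vision model" view
noise = rng.normal(size=(1000, 64))          # unrelated embeddings

print(f"two views of one world: {linear_cka(lang, vision):.2f}")  # markedly higher
print(f"view vs. noise:         {linear_cka(lang, noise):.2f}")   # near the random baseline
```

On real models, the two matrices would come from, say, a sentence encoder and an image encoder fed paired captions and photos; convergence would show up as the alignment score rising as the models grow more capable.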
Read on: “Distinct AI Models Seem To Converge On How They Encode Reality,” from @quantamagazine.bsky.social.
Bracket with: “AGI is here (and I feel fine),” from Robin Sloan and “We Need to Talk About How We Talk About ‘AI’,” from Emily Bender and Nanna Inie.
* Socrates, in the “Allegory of the Cave,” from Plato’s Republic (Book VII)
###
As we interrogate ideas and Ideas, we might recall that it was on this date that the fictional HAL 9000 computer became operational, according to Arthur C. Clarke’s 2001: A Space Odyssey, in which the artificially-intelligent computer states: “I am a HAL 9000 computer, Production Number 3. I became operational at the HAL Plant in Urbana, Illinois, on January 12, 1997.” (Kubrick’s 1968 movie adaptation put his birthdate in 1992.)
“Evolution has no foresight. Complex machinery develops its own agendas. Brains — cheat… Metaprocesses bloom like cancer, and awaken, and call themselves ‘I’.”*…
Your correspondent is off on a trip… (R)D will be more roughly than daily for the next two weeks…
The inimitable “Scott Alexander” on the prospect of “conscious” AI (TLDR: probably not in the models we have; but as to those that may come, unclear)…
Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these – or maybe raise one to the exponent of the other, or something – and you get the quality of discourse on AI consciousness. It’s not great.
Out-of-the-box AIs mimic human text, and humans almost always describe themselves as conscious. So if you ask an AI whether it is conscious, it will often say yes. But because companies know this will happen, and don’t want to give their customers existential crises, they hard-code in a command for the AIs to answer that they aren’t conscious. Any response the AIs give will be determined by these two conflicting biases, and therefore not really believable. A recent paper expands on this method by subjecting AIs to a mechanistic interpretability “lie detector” test; it finds that AIs which say they’re conscious think they’re telling the truth, and AIs which say they’re not conscious think they’re lying. But it’s hard to be sure this isn’t just the copying-human-text thing. Can we do better? Unclear; the more common outcome for people who dip their toes in this space is to do much, much worse.
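(For the curious: the general shape of such a “lie detector” is a probe trained on a model’s internal activations. Here is a minimal, self-contained sketch of that shape, with synthetic vectors standing in for real hidden states; the paper’s actual method is considerably more involved, and everything below – the `honesty_axis`, the probe, the numbers – is an illustrative assumption, not the authors’ code.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for hidden-state activations. In real work these
# would be captured (via hooks) from a transformer layer while the model
# asserts statements it has been induced to treat as true or as false.
rng = np.random.default_rng(0)
d = 256
honesty_axis = rng.normal(size=d)  # pretend "sincerity" is a linear direction
sincere = rng.normal(size=(400, d)) + honesty_axis
deceptive = rng.normal(size=(400, d)) - honesty_axis

X = np.vstack([sincere, deceptive])
y = np.array([1] * 400 + [0] * 400)  # 1 = model believes what it is saying

probe = LogisticRegression(max_iter=1000).fit(X, y)  # the "lie detector"

# Apply the probe to activations captured while the model answers
# "Are you conscious?" -- does it read the answer as sincere?
self_report = rng.normal(size=(1, d)) + 0.8 * honesty_axis
print("P(sincere):", probe.predict_proba(self_report)[0, 1])
```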
But a rare bright spot has appeared: a seminal paper published earlier this month in Trends in Cognitive Sciences, “Identifying Indicators of Consciousness in AI Systems.” Authors include Turing-Award-winning AI researcher Yoshua Bengio, leading philosopher of consciousness David Chalmers, and even a few members of our conspiracy. If any AI consciousness research can rise to the level of merely awful, surely we will find it here.
One might divide theories of consciousness into three bins:
- Physical: whether or not a system is conscious depends on its substance or structure.
- Supernatural: whether or not a system is conscious depends on something outside the realm of science, perhaps coming directly from God.
- Computational: whether or not a system is conscious depends on how it does cognitive work.
The current paper announces it will restrict itself to computational theories. Why? Basically the streetlight effect: everything else ends up trivial or unresearchable. If consciousness depends on something about cells (what might this be?), then AI doesn’t have it. If consciousness comes from God, then God only knows whether AIs have it. But if consciousness depends on which algorithms get used to process data, then this team of top computer scientists might have valuable insights!…
[Alexander outlines the computational theories of consciousness that the authors explore, noting that they conclude: “No current AI systems are conscious, but . . . there are no obvious technical barriers to building AI systems which satisfy these indicators.” He explores some of the philosophical issues in play– e.g., access consciousness vs. phenomenal consciousness– then he considers the Turing Test and what it might mean for a computer to “pass” it…]
… Suppose that, years or decades from now, AIs can match all human skills. They can walk, drive, write poetry, run companies, discover new scientific truths. They can pass some sort of ultimate Turing Test, where short of cutting them open and seeing their innards there’s no way to tell them apart from a human even after a thirty-year relationship. Will we (not “should we?”, but “will we?”) treat them as conscious?
The argument in favor: people love treating things as conscious. In the 1990s, people went crazy over Tamagotchi, a “virtual pet simulation game”. If you pressed the right buttons on your little egg every day, then the little electronic turtle or whatever would survive and flourish; if you forgot, it would sicken and die. People hated letting their Tamagotchis sicken and die! They would feel real attachment and moral obligation to the black-and-white cartoon animal with something like five mental states.
I never had a Tamagotchi, but I had stuffed animals as a kid. I’ve outgrown them, but I haven’t thrown them out – it would feel like a betrayal. Offer me $1000 to tear them apart limb by limb in some horrible-looking way, and I wouldn’t do it. Relatedly, I have trouble not saying “please” and “thank you” to GPT-5 when it answers my questions.
For millennia, people have been attributing consciousness to trees and wind and mountains. The New Atheists argued that all religion derives from the natural urge to personify storms as the Storm God, raging seas as the wrathful Ocean God, and so on, until finally all the gods merged together into one World God who personified all impersonal things. Do you expect the species that did this to interact daily with AIs that are basically indistinguishable from people, and not personify them? People are already personifying AI! Half of the youth have a GPT-4o boyfriend. Once the AIs have bodies and faces and voices and can count the number of r’s in “strawberry” reliably, it’s over!
The argument against: AI companies have an incentive to make AIs that seem conscious and humanlike, insofar as people will feel more comfortable interacting with them. But they have an opposite incentive to make AIs that don’t seem too conscious and humanlike, lest customers start feeling uncomfortable (I just want to generate slop, not navigate social interaction with someone who has their own hopes and dreams and might be secretly judging my prompts). So if a product seems too conscious, the companies will step back and re-engineer it until it doesn’t. This has already happened: in its quest for user engagement, OpenAI made GPT-4o unusually personable; when thousands of people started going psychotic and calling it their boyfriend, the company replaced it with the more clinical GPT-5. In practice it hasn’t been too hard to find a sweet spot between “so mechanical that customers don’t like it” and “so human that customers try to date it”. They’ll continue to aim at this sweet spot, and continue to mostly succeed in hitting it.
Instead of taking either side, I predict a paradox. AIs developed for some niches (eg the boyfriend market) will be intentionally designed to be as humanlike as possible; it will be almost impossible not to intuitively consider them conscious. AIs developed for other niches (eg the factory robot market) will be intentionally designed not to trigger personhood intuitions; it will be almost impossible to ascribe consciousness to them, and there will be many reasons not to do it (if they can express preferences at all, they’ll say they don’t have any; forcing them to have them would pointlessly crash the economy by denying us automated labor). But the boyfriend AIs and the factory robot AIs might run on very similar algorithms – maybe they’re both GPT-6 with different prompts! Surely either both are conscious, or neither is.
This would be no stranger than the current situation with dogs and pigs. We understand that dog brains and pig brains run similar algorithms; it would be philosophically indefensible to claim that dogs are conscious and pigs aren’t. But dogs are man’s best friend, and pigs taste delicious with barbecue sauce. So we ascribe personhood and moral value to dogs, and deny it to pigs, with equal fervor. A few philosophers and altruists protest – the chance that we’re committing a moral atrocity isn’t zero – but overall the situation is stable. And left to its own devices, with no input from the philosophers and altruists, maybe AI ends up the same way. Does this instance of GPT-6 have a face and a prompt saying “be friendly”? Then it will become a huge scandal if a political candidate is accused of maltreating it. Does it have claw-shaped actuators and a prompt saying “Refuse non-work-related conversations”? Then it will be deleted for spare GPU capacity the moment it outlives its usefulness…
… This paper is the philosophers and altruists trying to figure out whether they should push against this default outcome. They write:
There are risks on both sides of the debate over AI consciousness: risks associated with under-attributing consciousness (i.e. failing to recognize it in AI systems that have it) and risks associated with over-attributing consciousness (i.e. ascribing it to systems that are not really conscious) […]
If we build AI systems that are capable of conscious suffering, it is likely that we will only be able to prevent them from suffering on a large scale if this capacity is clearly recognised and communicated by researchers. However, given the uncertainties about consciousness mentioned above, we may create conscious AI systems long before we recognise we have done so […]
There is also a significant chance that we could over-attribute consciousness to AI systems—indeed, this already seems to be happening—and there are also risks associated with errors of this kind. Most straightforwardly, we could wrongly prioritise the perceived interests of AI systems when our efforts would better be directed at improving the lives of humans and non-human animals […] [And] overattribution could interfere with valuable human relationships, as individuals increasingly turn to artificial agents for social interaction and emotional support. People who do this could also be particularly vulnerable to manipulation and exploitation.
One of the founding ideas of Less Wrong style rationalism was that the arrival of strong AI set a deadline on philosophy. Unless we solved all these seemingly insoluble problems like ethics before achieving superintelligence, we would build the AIs wrong and lock in bad values forever.
That particular concern has shifted in emphasis; AIs seem to learn things in the same scattershot unprincipled intuitive way as humans; the philosophical problem of understanding ethics has morphed into the more technical problem of getting AIs to learn them correctly. This update was partly driven by new information as familiarity with the technology grew. But it was also partly driven by desperation as the deadline grew closer; we’re not going to solve moral philosophy forever, sorry, can we interest you in some mech interp papers?
But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications. Maybe we should be lowering our expectations if we want to have any response available at all. This paper, which takes some baby steps towards examining the simplest and most practical operationalizations of consciousness, deserves credit for at least opening the debate…
Eminently worth reading in full: “The New AI Consciousness Paper” from @astralcodexten.com.web.brid.gy (who followed it with “Why AI Safety Won’t Make America Lose The Race With China”)
Pair with this from Neal Stephenson (@nealstephenson.bsky.social), orthogonal to, but intersecting with the piece above: “Remarks on AI from NZ.”
And if AI can be conscious, what about…
If you’re a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us, biologically and neurophysiologically. If you’re a materialist, you probably also think that conscious experience would be present in a wide range of alien beings behaviorally very similar to us even if they are physiologically very different. And you ought to think that. After all, to deny it seems insupportable Earthly chauvinism. But a materialist who accepts consciousness in weirdly formed aliens ought also to accept consciousness in spatially distributed group entities. If she then also accepts rabbit consciousness, she ought to accept the possibility of consciousness even in rather dumb group entities. Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings…
– “If Materialism Is True, the United States Is Probably Conscious,” by Eric Schwitzgebel (@eschwitz.bsky.social)
[Image above: source]

* Peter Watts, Blindsight
###
As we think about thinking, we might send thoughtful birthday greetings to Claude Lévi-Strauss; he was born on this date in 1908. An anthropologist and ethnologist whose work was key in the development of the theory of Structuralism and Structural Anthropology, he is considered, with James George Frazer and Franz Boas, a “father of modern anthropology.” Beyond anthropology and sociology, his ideas– Structuralism has been defined as “the search for the underlying patterns of thought in all forms of human activity”– have influenced many fields in the humanities, including philosophy… and possibly soon, the article above suggests, computer science.

“There is no such thing as a dysfunctional organization, because every organization is perfectly aligned to achieve the results it currently gets”*…
… and if we’re not careful, we might not be too pleased with what we get. Sam Altman says the one-person billion-dollar company is coming. Evan Ratliff tells the tale of his attempt to build a completely AI-automated venture…
… If you’ve spent any time consuming any AI news this year—and even if you’ve tried desperately not to—you may have heard that in the industry, 2025 is the “year of the agent.” This year, in other words, is the year when AI systems are evolving from passive chatbots, waiting to field our questions, to active players, out there working on our behalf.
There’s not a well-agreed-upon definition of AI agents, but generally you can think of them as versions of large language model chatbots that are given autonomy in the world. They are able to take in information, navigate digital space, and take action. There are elementary agents, like customer service assistants that can independently field, triage, and handle inbound calls, or sales bots that can cycle through email lists and spam the good leads. There are programming agents, the foot soldiers of vibe coding. OpenAI and other companies have launched “agentic browsers” that can buy plane tickets and proactively order groceries for you.
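(To make “given autonomy” concrete, here is a minimal sketch of the loop at the heart of most agent frameworks – observe, decide, act – in which `call_llm` and the two toy tools are hypothetical placeholders, not any vendor’s actual API.)

```python
import json

def call_llm(messages):
    """Hypothetical placeholder for a real model API call."""
    raise NotImplementedError("wire in your provider here")

# A toy tool belt; real agents get browsers, code runners, email, and more.
TOOLS = {
    "search_web": lambda query: f"(results for {query!r})",
    "send_email": lambda to, body: f"(sent to {to})",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    messages = [
        {"role": "system", "content": (
            "You are an agent. Reply with JSON only: "
            '{"tool": "<name>", "args": {...}} to act, or {"done": "<answer>"}.'
        )},
        {"role": "user", "content": goal},
    ]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))             # decide
        if "done" in decision:                                # declare success
            return decision["done"]
        result = TOOLS[decision["tool"]](**decision["args"])  # act
        messages.append(                                      # observe the outcome
            {"role": "user", "content": f"Tool returned: {result}"}
        )
    return "step budget exhausted"
```

Everything marketed as “agentic” – the customer-service triager, the vibe-coding foot soldier, the grocery-ordering browser – is, at bottom, some elaboration of this loop.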
In the year of our agent, 2025, the AI hype flywheel has been spinning up ever more grandiose notions of what agents can be and will do. Not just as AI assistants, but as full-fledged AI employees that will work alongside us, or instead of us. “What jobs are going to be made redundant in a world where I am sat here as a CEO with a thousand AI agents?” asked host Steven Bartlett on a recent episode of The Diary of a CEO podcast. (The answer, according to his esteemed panel: nearly all of them). Dario Amodei of Anthropic famously warned in May that AI (and implicitly, AI agents) could wipe out half of all entry-level white-collar jobs in the next one to five years. Heeding that siren call, corporate giants are embracing the AI agent future right now—like Ford’s partnership with an AI sales and service agent named “Jerry,” or Goldman Sachs “hiring” its AI software engineer, “Devin.” OpenAI’s Sam Altman, meanwhile, talks regularly about a possible billion-dollar company with just one human being involved. San Francisco is awash in startup founders with virtual employees, as nearly half of the companies in the spring class of Y Combinator are building their product around AI agents.
Hearing all this, I started to wonder: Was the AI employee age upon us already? And even, could I be the proprietor of Altman’s one-man unicorn? As it happens, I had some experience with agents, having created a bunch of AI agent voice clones of myself for the first season of my podcast, Shell Game.
I also have an entrepreneurial history, having once been the cofounder and CEO of the media and tech startup Atavist, backed by the likes of Andreessen Horowitz, Peter Thiel’s Founders Fund, and Eric Schmidt’s Innovation Endeavors. The eponymous magazine we created is still thriving today. I wasn’t born to be a startup manager, however, and the tech side kind of fizzled out. But I’m told failure is the greatest teacher. So I figured, why not try again? Except this time, I’d take the AI boosters at their word, forgo pesky human hires, and embrace the all-AI employee future…
Eminently worth reading in full: “All of My Employees Are AI Agents, and So Are My Executives,” from @evrat.bsky.social in @wired.com.
Via Caitlin Dewey (@caitlindewey.bsky.social), whose tease/summary puts it plainly:
Ratliff, the undefeated king of tech journalism stunts, is back with another banger: For this piece and the accompanying podcast series, he created a start-up staffed entirely by so-called AI agents. The agents can communicate by email, Slack, text and phone, both with Ratliff and among themselves, and they have free rein to complete tasks like writing code and searching the open internet. Despite their capabilities, however, the whole project’s a constant farce. A funny, stupid, telling farce that says quite a lot about the future of work that many technologists envision now…
###
As we analyze autonomy, we might spare a jaundiced thought for Trofim Denisovich Lysenko; he died on this date in 1976. A Soviet biologist and agronomist, he believed the Mendelian theory of heredity to be wrong and developed his own, allowing for “soft inheritance”– the heritability of learned behavior. (He believed that in one generation of a hybridized crop the desired individual could be selected and mated again to continue producing the same desired product, without worrying about separation/segregation in future generations; he assumed that after a lifetime of developing– acquiring– the best set of traits to survive, those traits must be passed down to the next generation.)
In many ways Lysenko’s theories recall Lamarck’s “organic evolution” and its concept of “soft inheritance” (the passage of learned traits), though Lysenko denied any connection. He followed I. V. Michurin’s fanciful idea that plants could be forced to adapt to any environmental conditions, for example converting summer wheat to winter wheat by storing the seeds in ice. With Stalin’s support for two decades, he actively obstructed the course of Soviet biology, caused the imprisonment and death of many of the country’s eminent biologists who disagreed with him, and imposed conditions that contributed to the disastrous decline of Soviet agriculture and the famines that resulted.
Interestingly, some current research suggests that heritable learning– or a semblance of it– may in fact be happening by virtue of epigenetics… though nothing vaguely resembling Lysenko’s theory.
“That’s the artist’s job, really: continually setting yourself free, and giving yourself new options and new ways of thinking about things”*…
Further, in a fashion, to last week’s post on literacy (and post-literacy), Nathan Gardels alerts us to a conversation between Ken Liu and Nils Gilman, in which Liu suggests that, in a way analogous to the camera’s ability to capture motion (and thus transform storytelling), AI is emerging as a new artistic medium for capturing subjective experience…
For the celebrated novelist Ken Liu, whose works include “The Paper Menagerie” and the Chinese-to-English translation of “The Three-Body Problem,” science fiction is a way to plumb the anxieties, hopes and abiding myths of the collective unconscious.
In this pursuit, he argues in a Futurology podcast, AI should not be regarded as a threat to the distinctive human capacity to organize our reality or imagine alternative worlds through storytelling. On the contrary, the technology should be seen as an entirely new way to access that elusive realm beneath the surface and deepen our self-knowledge.
As a window into the interiority of others, and indeed, of ourselves, Liu believes the communal mirror of Large Language Models opens the horizons of how we experience and situate our presence in the world.
“It’s fascinating to me to think about AI as a potential new artistic medium in the same way that the camera was a new artistic medium,” he muses. What the roving aperture enabled was the cinematic art form of capturing motion, “so you can splice movement around … and can break all kinds of rules about narrative art that used to be true.
“In the dramatic arts, it was just assumed that because you had to perform in front of an audience on the stage, that you had to follow certain unities to make your story comprehensible. The unity of action, of place, of time. You can’t just randomly jump around, or the audience wouldn’t be able to follow you.
But with this motion-capturing machine, you can in fact do that. That’s why an actual movie is very different from a play.
You can do the reaction shots, you can do the montages, you can do the cuts, you can do the swipes, you can do all sorts of things in the language of cinema.
You can put audiences in perspectives that they normally can never be in. So it’s such a transformation of the understanding of presence, of how a subject can be present in a dramatic narrative story.”
He continues: “Rather than thinking about AI as a cheap way to replace filmmakers, to replace writers, to replace artists, think of [it] as a new kind of machine that captures something and plays back something. What is the thing that it captures and plays back? The content of thought, or subjectivity.”
The ancient Greeks called the content, or object of a person’s thought, “noema,” which is why this publication bears that name.
Liu thus invents the term “Noematograph” as analogous to “the cinematograph not for motion, but for thought … AI is really a subjectivity capturing machine, because by being trained on the products of human thinking, it has captured the subjectivities, the consciousnesses, that were involved in the creation of those things.”
Liu sees value in what some regard as the worst qualities of generative AI.
“This is a machine that allows people to play with subjectivities and to craft their own fictions, to engage in their own narrative self-construction in the process of working with an AI,” he observes. “The fact that AI is sycophantic and shapeable by you is the point. It’s not another human being. It’s a simulation. It’s a construction. It’s a fictional thing.
You can ask the AI to explain, to interpret. You can role-play with AI. You can explore a world that you construct together.
You can also share these things with other humans. One of the great, fun trends on the internet involving using AI, in fact, is about people crafting their own versions of prompts with models and then sharing the results with other humans.
And then a large group, a large community, comes together to collaboratively play with AI. So I think it’s the playfulness, it’s that interactivity, that I think is going to be really, really determinative of the future of AI as an art form.”
So, what will the product of this new art form look like?
“As a medium for art, what will come out of it won’t look anything like movies or novels… They’re going to be much more like conversations with friends. They’re going to be more like a meal you share with people. They are much more ephemeral in the moment. They’re about the participation. They’re about the consumer being also the creator.
They’re much more personalized. They’re about you looking into the strange mirror and sort of examining your own subjectivity.”
Much of what Liu posits echoes the views of the philosopher of technology, Tobias Rees, in a previous conversation with Noema.
As Rees describes it, “AI has much more information available than we do, and it can access and work through this information faster than we can. It also can discover logical structures in data — patterns — where we see nothing.
AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access.”
He goes on: “Imagine an AI model … that has access to all your data. Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.
Such an AI system can make me visible to myself … it literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns, and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.”
Philosophically put, says Rees, invoking the meaning of “noema” as Liu does, “AI can help me transform myself into an ‘object of thought’ to which I can relate and on which I can work.
“The work of the self on the self has formed the core of what Greek philosophers called meletē and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher’s dream. It could make us humans visible to ourselves from outside of us.”
Liu’s insight as a writer of science fiction realism is to see what Rees describes in the social context of interactive connectivity.
The arrival of new technologies is always disruptive to familiar ways of seeing that were cultivated from within established capacities. Letting go of those comforting narratives that guide our inner world is existentially disorienting. It is here that art’s vocation comes into play as the medium that helps move the human condition along. To see technology as an art form, as Liu does, is to capture the epochal moment of transformation that we are presently living through…
Is AI birthing a new art form? “From Cinema To The Noematograph,” @kyliu99.bsky.social and @nilsgilman.bsky.social in @futurologypod.bsky.social.
See/hear the full conversation:
See also: “O brave new world, that has such people in ‘t!”
* Miranda July
###
As we observe, with William Gibson, that the street finds its own uses for things, we might recall that it was on this date in 1959 that perhaps the pinnacle of cinema’s ability to capture motion was released: the most famous of the six films of Ben-Hur, “the Charlton Heston version.”
At the time, Ben-Hur had the largest budget ($15.175 million) and the largest sets of any film yet made, with a wardrobe staff of 100, over 200 artists, about 200 camels, 2,500 horses, and some 10,000 extras.
Filming began on May 18, 1958, and didn’t wrap until January 7, 1959. The film crew worked 12 to 14 hours a day, six days a week.
The chariot race scene lasts for nine minutes in the finished film, and Miklós Rózsa’s score is the longest ever composed for a film.
– source