(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Charisma is not so much getting people to like you as getting people to like themselves when you’re around”*…

Donald Trump and Barack Obama at Trump’s inauguration (source)

Charisma: hard to define, but clear when one encounters it. Joe Zadeh looks at charisma’s history– both as a phenomenon and as a concept– and contemplates its future (spoiler alert– AI figures).

After recounting the story of Stefan George, a German poet and thought leader who was hugely consequential in Germany in the first half of the 20th century, he turns to pioneering sociologist Max Weber, who met George in 1910…

At the time, charisma was an obscure religious concept used mostly in the depths of Christian theology. It had featured almost 2,000 years earlier in the New Testament writings of Paul to describe figures like Jesus and Moses who’d been imbued with God’s power or grace. Paul had borrowed it from the Ancient Greek word “charis,” which more generally denoted someone blessed with the gift of grace. Weber thought charisma shouldn’t be restricted to the early days of Christianity, but rather was a concept that explained a far wider social phenomenon, and he would use it more than a thousand times in his writings. He saw charisma echoing throughout culture and politics, past and present, and especially loudly in the life of Stefan George…

Weber had died in 1920, before George truly reached the height of his powers (and before the wave of totalitarian dictatorships that would define much of the century), but he’d already seen enough to fatten his theory of charisma. At times of crisis, confusion and complexity, Weber thought, our faith in traditional and rational institutions collapses and we look for salvation and redemption in the irrational allure of certain individuals. These individuals break from the ordinary and challenge existing norms and values. Followers of charismatic figures come to view them as “extraordinary,” “superhuman” or even “supernatural” and thrust them to positions of power on a passionate wave of emotion. 

In Weber’s mind, this kind of charismatic power wasn’t just evidenced by accounts of history — of religions and societies formed around prophets, saints, shamans, war heroes, revolutionaries and radicals. It was also echoed in the very stories we tell ourselves — in the tales of mythical heroes like Achilles and Cú Chulainn. 

These charismatic explosions were usually short-lived and unstable — “every hour of its existence brings it nearer to this end,” wrote Weber — but the most potent ones could build worlds and leave behind a legacy of new traditions and values that then became enshrined in more traditional structures of power. In essence, Weber believed, all forms of power started and ended with charisma; it drove the volcanic eruptions of social upheaval. In this theory, he felt he’d uncovered “the creative revolutionary force” of history. 

Weber was not the first to think like this. Similar ideas had been floating around at least as far back as the mid-1700s, when the Scottish philosopher David Hume had written that in the battle between reason and passion, the latter would always win. And it murmured in the 1800s in Thomas Carlyle’s “Great Man Theory” and in Nietzsche’s idea of the “Übermensch.” But none would have quite the global impact of Weber, whose work on charisma would set it on a trajectory to leap the fence of religious studies and become one of the most overused yet least understood words in the English language.

A scientifically sound or generally agreed-upon definition of charisma remains elusive even after all these years of investigation. Across sociology, anthropology, psychology, political science, history and theater studies, academics have wrestled with how exactly to explain, refine and apply it, as well as identify where it is located: in the powerful traits of a leader or in the susceptible minds of a follower or perhaps somewhere between the two, like a magnetic field…

…Weber himself would disagree with the individualized modern understanding of charisma. “He was actually using it in a far more sophisticated way,” he said. “It wasn’t about the power of the individual — it was about the reflection of that power by the audience, about whether they receive it. He saw it as a process of interaction. And he was as fascinated by crowds as he was by individuals.” In Weber’s words: “What is alone important is how the [charismatic] individual is actually regarded by those subject to charismatic authority, by his ‘followers’ or ‘disciples.’ … It is recognition on the part of those subject to authority which is decisive for the validity of charisma.”

The Eurocentric version of how Weber conceptualized charisma is that he took it from Christianity and transformed it into a theory for understanding Western culture and politics. In truth, it was also founded on numerous non-Western spiritual concepts that he’d discovered via the anthropological works of his day. In one of the less-quoted paragraphs of his 1920 book “The Sociology of Religion,” Weber wrote that his nascent formulation of charisma was inspired by mana (Polynesian), maga (Zoroastrian, and from which we get our word magic) and orenda (Native American). “In this moment,” Wright wrote in a research paper exploring this particular passage, “we see our modern political vocabulary taking shape before our eyes.”

Native American beliefs were of particular interest to Weber. On his only visit to America in 1904, he turned down an invitation from Theodore Roosevelt to visit the White House and headed to the Oklahoma plains in search of what remained of Indigenous communities there. Orenda is an Iroquois term for a spiritual energy that flows through everything in varying degrees of potency. Like charisma, possessors of orenda are said to be able to channel it to exert their will. “A shaman,” wrote the Native American scholar J.N.B. Hewitt, “is one whose orenda is great.” But unlike the Western use of charisma, orenda was said to be accessible to everything, animate and inanimate, from humans to animals and trees to stones. Even the weather could be said to have orenda. “A brewing storm,” wrote Hewitt, is said to be “preparing its orenda.” 

This diffuse element of orenda — the idea that it could be imbued in anything at all — has prefigured a more recent evolution in the Western conceptualization of charisma: that it is more than human. Archaeologists have begun to apply it to the powerful and active social role that certain objects have played throughout history. In environmentalism, Jamie Lorimer of Oxford University has written that charismatic species like lions and elephants “dominate the mediascapes that frame popular sensibilities toward wildlife” and feature “disproportionately in the databases and designations that perform conservation.” 

Compelling explorations of nonhuman charisma have also come from research on modern technology. Human relationships with technology have always been implicitly spiritual. In the 18th century, clockmakers became a metaphor for God and clockwork for the universe. Airplanes were described as “winged gospels.” The original iPhone was heralded, both seriously and mockingly, as “the Jesus phone.” As each new popular technology paints its own vision of a better world, we seek in these objects a sort of redemption, salvation or transcendence. Some deliver miracles, some just appear to, and others fail catastrophically. 

Today, something we view as exciting, terrifying and revolutionary, and have endowed with the ability to know our deepest beliefs, prejudices and desires, is not a populist politician, an internet influencer or a religious leader. It’s an algorithm. 

These technologies now have the power to act in the world, to know things and to make things happen. In many instances, their impact is mundane: They arrange news feeds, suggest clothes to buy and calculate credit scores. But as we interact more and more with them on an increasingly intimate level, in the way we would ordinarily with other humans, we develop the capacity to form charismatic bonds. 

It’s now fairly colloquial for someone to remark that they “feel seen” by algorithms and chatbots. In a 2022 study of people who had formed deep and long-term friendships with the AI-powered program Replika, participants reported that they viewed it as “a part of themselves or as a mirror.” On apps like TikTok, more than any other social media platform, the user experience is almost entirely driven by an intimate relationship with the algorithm. Users are fed a stream of videos not from friends or chosen creators, but mostly from accounts they don’t follow and haven’t interacted with. The algorithm wants users to spend more time on the platform, and so through a series of computational procedures, it draws them down a rabbit hole built from mathematical inferences of their passions and desires. 
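The rabbit-hole dynamic described above can be sketched as a toy reinforcement loop. This is purely my illustration, not TikTok's actual system; the topic names, the starting weights, and the watch-time rule are all assumptions:

```python
import random

# Toy sketch of an engagement-driven feed: topics that hold the user's
# attention get reweighted upward, so sampling in proportion to the
# weights narrows the feed toward whatever the user lingers on.
random.seed(0)

topics = ["cooking", "politics", "cats", "diy"]
weights = {t: 1.0 for t in topics}  # start with no inferred preference
true_interest = {"cooking": 0.9, "politics": 0.2, "cats": 0.6, "diy": 0.1}

def serve(n=500):
    for _ in range(n):
        # sample a video's topic in proportion to current weights
        topic = random.choices(topics, [weights[t] for t in topics])[0]
        watch_time = true_interest[topic] * random.random()
        weights[topic] += watch_time  # longer watches boost that topic

serve()
share = {t: round(weights[t] / sum(weights.values()), 2) for t in topics}
print(share)  # the feed concentrates on the highest-interest topics
```

The rich-get-richer update is the whole point: a modest early difference in watch time compounds into a feed dominated by a few inferred passions.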

The inability to understand quite how sophisticated algorithms exert their will on us (largely because such information is intentionally clouded), while nonetheless perceiving their power, enables them to become an authority in our lives. As the psychologist Donald McIntosh explained almost half a century ago, “The outstanding quality of charisma is its enormous power, resting on the intensity and strength of the forces which lie unconscious in every human psyche. … The ability to tap these forces lies behind everything that is creative and constructive in human action, but also behind the terrible destructiveness of which humans are capable. … In the social and political realm, there is no power to match that of the leader who is able to evoke and harness the unconscious resources of his followers.”

In an increasingly complex and divided society, in which partisanship has hindered the prospect of cooperation on everything from human rights to the climate crisis, the thirst for a charismatic leader or artificial intelligence that can move the masses in one direction is as seductive as it has ever been. But whether such a charismatic phenomenon would lead to good or bad, liberation or violence, salvation or destruction, is a conundrum that remains at the core of this two-faced phenomenon. “The false Messiah is as old as the hope for the true Messiah,” wrote Franz Rosenzweig. “He is the changing form of this changeless hope.”… 

How our culture, politics, and technology became infused with a mysterious social phenomenon that everyone can feel but nobody can explain: “The Secret History And Strange Future Of Charisma,” from @joe_zadeh in @NoemaMag. Eminently worth reading in full.

* Robert Breault

###

As we muse on magnetism, we might recall that it was on this date in 1723 that Johann Sebastian Bach assumed the office of Thomaskantor (Musical Director of the Thomanerchor, now an internationally known boys’ choir founded in Leipzig in 1212), presenting his new cantata, Die Elenden sollen essen, BWV 75— a complex work in two parts of seven movements each that marks the beginning of his first annual cycle of cantatas— in the St. Nicholas Church.

Thomaskirche and its choir school, 1723 (source)

“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…

A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…

In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.

This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will fuse into an atom of magnesium and an alpha particle, releasing a whole lot of energy:

N14 + N14 ⇒ Mg24 + α + 17.7 MeV
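The arithmetic behind that reaction can be checked in a few lines. A minimal sketch, assuming standard tabulated atomic masses and the usual mass-to-energy conversion (the numerical values below are mine, not the talk's):

```python
# Back-of-envelope check of N-14 + N-14 -> Mg-24 + alpha.
# Atomic masses in unified mass units (u); Q = (mass in - mass out) * 931.494 MeV/u.
M_N14 = 14.0030740   # nitrogen-14
M_MG24 = 23.9850417  # magnesium-24
M_HE4 = 4.0026033    # helium-4 (the alpha particle)

# bookkeeping: nucleons 14+14 = 24+4 and charges 7+7 = 12+2 both balance
assert 14 + 14 == 24 + 4 and 7 + 7 == 12 + 2

q_mev = (2 * M_N14 - M_MG24 - M_HE4) * 931.494
print(round(q_mev, 1))
```

With these masses the release comes out near 17.2 MeV, within roughly half an MeV of the talk's quoted 17.7; differences at that level trace to which mass table is used. Either way, the reaction is strongly exothermic, which is exactly why the question of a self-sustaining chain had to be taken seriously.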

The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?

Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.

Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.

… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.

[Ceglowski unpacks those assumptions…]

If you accept all these premises, what you get is disaster!

Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.

As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.

At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.

From there things get very sci-fi very quickly.

[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]

This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.

People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?

Is the idea of “superintelligence” just a memetic hazard?

When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.

The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.

[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]

The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.

In his book, Bostrom lists six things an AI would have to master to take over the world:

  • Intelligence Amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity

If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.

Sam Altman, the man who runs Y Combinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.

Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash from non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.

So what’s the answer? What’s the fix?

We need better scifi! And like so many things, we already have the technology…

[Ceglowski explains– and demonstrates– what he means…]

In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.

It’s as if those Alamogordo scientists had decided to focus entirely on whether they were going to blow up the atmosphere, and forgotten that they were also making nuclear weapons and would have to figure out how to cope with that.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…

In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…

Eminently worth reading in full: “Superintelligence: The Idea That Eats Smart People,” from @baconmeteor.

* John Rich

###

As we find balance, we might recall that it was on this date in 1936 that Alan Turing’s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…

source

“The limits of my language mean the limits of my world”*…

It seems clear that we are on the verge of an impactful new wave of technology. Venkatesh Rao suggests that it may be a lot more impactful than most of us imagine…

In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.

But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.

Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.

And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…

What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?

What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…

There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.

Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create it. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.

Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…

Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.

(Image above: source)

* Ludwig Wittgenstein, Tractatus Logico-Philosophicus

###

As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, his thinking has had a powerful influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.

Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein— a founder of analytic philosophy, one principal focus of which was the philosophy of language.

source

“If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg”*…

Boris Eldagsen won the creative open category at this year’s Sony World Photography Award with his entry “Pseudomnesia: The Electrician.” He rejected the award after revealing that his submission had been generated by AI. (source, and more background)

Emerging technology is being used (as ever it has been) to exploit our reflexive assumptions. Victor R. Lee suggests that it’s time to recalibrate how authenticity is judged…

It turns out that pop stars Drake and The Weeknd didn’t suddenly drop a new track that went viral on TikTok and YouTube in April 2023. The photograph that won an international photography competition that same month wasn’t a real photograph. And the image of Pope Francis sporting a Balenciaga jacket that appeared in March 2023? That was also a fake.

All were made with the help of generative artificial intelligence, the new technology that can generate humanlike text, audio, and images on demand through programs such as ChatGPT, Midjourney, and Bard, among others.

There’s certainly something unsettling about the ease with which people can be duped by these fakes, and I see it as a harbinger of an authenticity crisis that raises some difficult questions.

How will voters know whether a video of a political candidate saying something offensive was real or generated by AI? Will people be willing to pay artists for their work when AI can create something visually stunning? Why follow certain authors when stories in their writing style will be freely circulating on the internet?

I’ve been seeing the anxiety play out all around me at Stanford University, where I’m a professor and also lead a large generative AI and education initiative.

With text, image, audio, and video all becoming easier for anyone to produce through new generative AI tools, I believe people are going to need to reexamine and recalibrate how authenticity is judged in the first place.

Fortunately, social science offers some guidance.

Long before generative AI and ChatGPT rose to the fore, people had been probing what makes something feel authentic…

“Rethinking Authenticity in the Era of Generative AI,” from @VicariousLee in @undarkmag. Eminently worth reading in full.

And to put these issues into a socio-economic context, see Ted Chiang’s “Will A.I. Become the New McKinsey?” (and closer to the theme of the piece above, his earlier “ChatGPT Is a Blurry JPEG of the Web”).

* Victor R. Lee (in the article linked above)

###

As we ruminate on the real, we might send sentient birthday greetings to Oliver Selfridge; he was born on this date in 1926. A mathematician, he became an early– and seminal– computer scientist: a pioneer in artificial intelligence, and “the father of machine perception.”

Marvin Minsky considered Selfridge to be one of his mentors, and with Selfridge organized the 1956 Dartmouth workshop that is considered the founding event of artificial intelligence as a field. Selfridge wrote important early papers on neural networks, pattern recognition, and machine learning; and his “Pandemonium” paper (1959) is generally recognized as a classic in artificial intelligence. In it, Selfridge introduced the notion of “demons” that record events as they occur, recognize patterns in those events, and may trigger subsequent events according to patterns they recognize– which, over time, gave rise to aspect-oriented programming.

source

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

There is a wide range of opinions on AI and what it might portend. While artificial intelligence has its skeptics, and some argue that we should slow its development, AI is here, and it’s only getting warmed up (cf. Ezra Klein’s “This Changes Everything”).

As applications multiply (and get more sophisticated), there’s an understandable concern about its impact on employment. While tools like ChatGPT and DALL·E 2 are roiling the creative sphere, many economists are looking more broadly…

Like many revolutionary technologies before it, AI is likely to eliminate jobs. But, as has been the case in the past, experts argue, AI will likely offset much of that by spurring the creation of new jobs in addition to enhancing many existing jobs. The big question is: what sort of jobs?

“AI will wipe out a lot of current jobs, as has happened with all past technologies,” said Lawrence Katz, a labor economist at Harvard. “But I have no reason to think that AI and robots won’t continue changing the mix of jobs. The question is: will the change in the mix of jobs exacerbate existing inequalities? Will AI raise productivity so much that even as it displaces a lot of jobs, it creates new ones and raises living standards?”

Anu Madgavkar, who leads labor market research at the McKinsey Global Institute, estimates that one in four workers in the US are going to see more AI and technology adopted in their jobs. She said 50-60% of companies say they are pursuing AI-related projects. “So one way or the other people are going to have to learn to work with AI,” Madgavkar said.

While past rounds of automation affected factory jobs most, Madgavkar said that AI will hit white-collar jobs most. “It’s increasingly going into office-based work and customer service and sales,” she said. “They are the job categories that will have the highest rate of automation adoption and the biggest displacement. These workers will have to work with it or move into different skills.”…

US experts warn AI likely to kill off jobs – and widen wealth inequality

But most of these visions are rooted in an appreciation of what AI can currently do (and the likely extensions of those capabilities). What if AI develops in startling, discontinuous ways– what if it exhibits “emergence”?…

… Recent investigations… have revealed that LLMs (large language models) can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes…

The Unpredictable Abilities Emerging From Large AI Models

Perhaps we should be thinking about AI not just functionally, but also philosophically…

The development of Artificial Intelligence is a scientific and engineering project, but it’s also a philosophical one. Lingering debates in the philosophy of mind have the potential to be substantially demystified, if not outright resolved, through the creation of artificial minds that parallel capabilities once thought to be the exclusive province of the human brain.

And since our brain is how we know and interface with the world more generally, understanding how the mind works can shed light on every other corner of philosophy as well, from epistemology to metaethics. My view is thus the exact opposite of Noam Chomsky’s, who argues that the success of Large Language Models is of limited scientific or philosophical import, since such models ultimately reduce to giant inscrutable matrices. On the contrary, the discovery that giant inscrutable matrices can, under the right circumstances, do many things that otherwise require a biological brain is itself a striking empirical datum — one Chomsky chooses to simply dismiss a priori.

Biological brains differ in important ways from artificial neural networks, but the fact that the latter can emulate the capacities of the former really does contribute to human self-understanding. For one, it represents an independent line of evidence that the brain is indeed computational. But that’s just the tip of the iceberg. The success of LLMs may even help settle longstanding debates on the nature of meaning itself…

We’re all Wittgensteinians now

And maybe we should be careful about "othering" AI (or, for that matter, any of the other forms of intelligence that surround us)…

I don’t think there is such a thing as an artificial intelligence. There are multiple intelligences, many ways of doing intelligence. What I envisage to be more useful and interesting than artificial intelligence as we currently conceive of it—which is this incredibly reduced version of human intelligence— is something more distributed, more widely empowered, and more diverse than singular intelligence would allow for. It’s actually a conversation between multiple intelligences, focused on some narrow goals. I have a new, very long-term, very nascent project I’m calling Server Farm. And the vision of Server Farm is to create a setting in which multiple intelligences could work on a problem together. Those intelligences would be drawn from all different kinds of life. That could include computers, but it could also include fungi and plants and animals in some kind of information-sharing processing arrangement. The point is that it would involve more than one kind of thinking, happening in dialogue and relationship with each other.

James Bridle, "There's Nothing Unnatural About a Computer"

In the end, Tyler Cowen suggests, we should keep developing AI…

…what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?…

We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge…

Existential risk, AI, and the inevitable turn in human history

Still, we’re human, and we would do well, Samuel Arbesman suggests, to use the best of our human “tools”– the humanities– to understand AI…

So go study the concepts of narrative technique and use them to elucidate the behavior of LLMs. Or examine the rhetorical devices that writers and speakers have been using for millennia—and which GPT models have imbibed—and figure out how to use their "physical" principles in relating to these language models.

Ultimately, we need a deeper kind of cultural and humanistic competence, one that doesn’t just vaguely gesture at certain parts of history or specific literary styles. It’s still early days, but we need more of this thinking. To quote Hollis Robbins again: “Nobody yet knows what cultural competence will be in the AI era.” But we must begin to work this out.

AI, Semiotic Physics, and the Opcodes of Story World

All of which is to suggest that we are faced with a future that may well contain currently-unimaginable capabilities, which can accrue as threats, as opportunities, or as both. So, as the estimable Jaron Lanier reminds us, we need to remain centered…

“From my perspective,” he says, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”…

The way to ensure that we are sufficiently sane to survive is to remember it’s our humanness that makes us unique…

Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’

All of the above-sampled pieces are eminently worth reading in full.

Apposite (and offered without comment): Theta Noir

[Image above: source]

* Alan Kay

###

As we ponder progress, we might recall that it was on this date in 1979 that operators failed to notice that a relief valve was stuck open in the primary coolant system of Three Mile Island’s Unit 2 nuclear reactor following an unexpected shutdown. Consequently, enough coolant drained out of the system to allow the core to overheat and partially melt down– the worst commercial nuclear accident in American history.

Three Mile Island Nuclear Power Plant, near Harrisburg, PA