Posts Tagged ‘artificial intelligence’
“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…
A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…
In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.
This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will fuse into an atom of magnesium and an alpha particle, releasing a whole lot of energy:
¹⁴N + ¹⁴N ⇒ ²⁴Mg + α + 17.7 MeV
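[An editorial aside, for the bookkeeping-minded: the reaction balances – nucleon count and charge are conserved on both sides – as a quick tally in standard isotope notation confirms (the 17.7 MeV figure is Ceglowski’s):]

\[
{}^{14}_{7}\mathrm{N} \;+\; {}^{14}_{7}\mathrm{N} \;\longrightarrow\; {}^{24}_{12}\mathrm{Mg} \;+\; {}^{4}_{2}\alpha \;+\; 17.7\ \mathrm{MeV}
\]
% mass numbers: 14 + 14 = 28 = 24 + 4
% atomic numbers: 7 + 7 = 14 = 12 + 2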
The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?
Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.
Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.
But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.
At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.
… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.
[Ceglowski unpacks those assumptions…]
If you accept all these premises, what you get is disaster!
Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.
As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.
At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.
From there things get very sci-fi very quickly.
[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]
This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but to vaccinate you against it.
…
People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Is the idea of “superintelligence” just a memetic hazard?
When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.
The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.
Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.
But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.
The outside view doesn’t care about content; it sees the form and the context, and it doesn’t look good.
[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]
The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.
In his book, Bostrom lists six things an AI would have to master to take over the world:
- Intelligence amplification
- Strategizing
- Social manipulation
- Hacking
- Technology research
- Economic productivity
If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.
Sam Altman, the man who runs Y Combinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.
Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash from non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.
I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.
So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.
So what’s the answer? What’s the fix?
We need better sci-fi! And like so many things, we already have the technology…
[Ceglowski explains– and demonstrates– what he means…]
In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.
It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.
The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.
And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.
So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.
What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…
In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…
Eminently worth reading in full: “Superintelligence: The Idea That Eats Smart People,” from @baconmeteor.
* John Rich
###
As we find balance, we might recall that it was on this date in 1936 that Alan Turing’s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…
“The limits of my language mean the limits of my world”*…
It seems clear that we are on the verge of a consequential new wave of technology. Venkatesh Rao suggests that it may be a lot more impactful than most of us imagine…
In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.
But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.
…
Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.
And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…
What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?
What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…
There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.
This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.
Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create it. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.
…
Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…
Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.
(Image above: source)
* Ludwig Wittgenstein, Tractatus Logico-Philosophicus
###
As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, Russell has had a powerful influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.
Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein— a founder of analytic philosophy, one principal focus of which was the philosophy of language.
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…
There is a wide range of opinions on AI and what it might portend. While artificial intelligence has its skeptics, and some argue that we should slow its development, AI is here, and it’s only getting warmed up (cf. Ezra Klein’s “This Changes Everything”).
As applications multiply (and get more sophisticated), there’s an understandable concern about its impact on employment. While tools like ChatGPT and DALL·E 2 are roiling the creative sphere, many economists are looking more broadly…
Like many revolutionary technologies before it, AI is likely to eliminate jobs. But, as has been the case in the past, experts argue, AI will likely offset much of that by spurring the creation of new jobs in addition to enhancing many existing jobs. The big question is: what sort of jobs?
“AI will wipe out a lot of current jobs, as has happened with all past technologies,” said Lawrence Katz, a labor economist at Harvard. “But I have no reason to think that AI and robots won’t continue changing the mix of jobs. The question is: will the change in the mix of jobs exacerbate existing inequalities? Will AI raise productivity so much that even as it displaces a lot of jobs, it creates new ones and raises living standards?”
Anu Madgavkar, who leads labor market research at the McKinsey Global Institute, estimates that one in four workers in the US are going to see more AI and technology adopted in their jobs. She said 50-60% of companies say they are pursuing AI-related projects. “So one way or the other people are going to have to learn to work with AI,” Madgavkar said.
While past rounds of automation affected factory jobs most, Madgavkar said that AI will hit white-collar jobs most. “It’s increasingly going into office-based work and customer service and sales,” she said. “They are the job categories that will have the highest rate of automation adoption and the biggest displacement. These workers will have to work with it or move into different skills.”…
“US experts warn AI likely to kill off jobs – and widen wealth inequality”
But most of these visions are rooted in an appreciation of what AI can currently do (and the likely extensions of those capabilities). What if AI develops in startling, discontinuous ways– what if it exhibits “emergence”?…
… Recent investigations… have revealed that LLMs (large language models) can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes…
“The Unpredictable Abilities Emerging From Large AI Models”
Perhaps we should be thinking about AI not just functionally, but also philosophically…
The development of Artificial Intelligence is a scientific and engineering project, but it’s also a philosophical one. Lingering debates in the philosophy of mind have the potential to be substantially demystified, if not outright resolved, through the creation of artificial minds that parallel capabilities once thought to be the exclusive province of the human brain.
And since our brain is how we know and interface with the world more generally, understanding how the mind works can shed light on every other corner of philosophy as well, from epistemology to metaethics. My view is thus the exact opposite of Noam Chomsky’s, who argues that the success of Large Language Models is of limited scientific or philosophical import, since such models ultimately reduce to giant inscrutable matrices. On the contrary, the discovery that giant inscrutable matrices can, under the right circumstances, do many things that otherwise require a biological brain is itself a striking empirical datum — one Chomsky chooses to simply dismiss a priori.
Biological brains differ in important ways from artificial neural networks, but the fact that the latter can emulate the capacities of the former really does contribute to human self-understanding. For one, it represents an independent line of evidence that the brain is indeed computational. But that’s just the tip of the iceberg. The success of LLMs may even help settle longstanding debates on the nature of meaning itself…
“We’re all Wittgensteinians now”
And maybe we should be careful about “othering” AI (or, for that matter, any of the other forms of intelligence that surround us)…
I don’t think there is such a thing as an artificial intelligence. There are multiple intelligences, many ways of doing intelligence. What I envisage to be more useful and interesting than artificial intelligence as we currently conceive of it—which is this incredibly reduced version of human intelligence— is something more distributed, more widely empowered, and more diverse than singular intelligence would allow for. It’s actually a conversation between multiple intelligences, focused on some narrow goals. I have a new, very long-term, very nascent project I’m calling Server Farm. And the vision of Server Farm is to create a setting in which multiple intelligences could work on a problem together. Those intelligences would be drawn from all different kinds of life. That could include computers, but it could also include fungi and plants and animals in some kind of information-sharing processing arrangement. The point is that it would involve more than one kind of thinking, happening in dialogue and relationship with each other.
James Bridle, “There’s Nothing Unnatural About a Computer“
In the end, Tyler Cowen suggests, we should keep developing AI…
…what kind of civilization is it that turns away from the challenge of dealing with more…intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI? Do you really want to press the button, giving us that kind of American civilization?…
We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge…
“Existential risk, AI, and the inevitable turn in human history”
Still, we’re human, and we would do well, Samuel Arbesman suggests, to use the best of our human “tools”– the humanities– to understand AI…
So go study the concepts of narrative technique and use them to elucidate the behavior of LLMs. Or examine the rhetorical devices that writers and speakers have been using for millennia—and which GPT models have imbibed—and figure out how to use their “physical” principles in relating to these language models.
Ultimately, we need a deeper kind of cultural and humanistic competence, one that doesn’t just vaguely gesture at certain parts of history or specific literary styles. It’s still early days, but we need more of this thinking. To quote Hollis Robbins again: “Nobody yet knows what cultural competence will be in the AI era.” But we must begin to work this out.
“AI, Semiotic Physics, and the Opcodes of Story World”
All of which is to suggest that we are faced with a future that may well contain currently-unimaginable capabilities, that can accrue as threats or (and) as opportunities. So, as the estimable Jaron Lanier reminds us, we need to remain centered…
“From my perspective,” he says, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”…
The way to ensure that we are sufficiently sane to survive is to remember it’s our humanness that makes us unique…
“Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’”
All of the above-sampled pieces are eminently worth reading in full.
Apposite (and offered without comment): Theta Noir
[Image above: source]
* Alan Kay
###
As we ponder progress, we might recall that it was on this date in 1979 that operators failed to notice that a relief valve was stuck open in the primary coolant system of Three Mile Island’s Unit 2 nuclear reactor following an unexpected shutdown. Consequently, enough coolant drained out of the system to allow the core to overheat and partially melt down– the worst commercial nuclear accident in American history.