(Roughly) Daily

Posts Tagged ‘rhetoric’

“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…

A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…

In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.

This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium, an alpha particle, and release a whole lot of energy:

¹⁴N + ¹⁴N ⇒ ²⁴Mg + α + 17.7 MeV

The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?

Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.

Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.

… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.

[Ceglowski unpacks those assumptions…]

If you accept all these premises, what you get is disaster!

Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.

As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.

At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.

From there things get very sci-fi very quickly.

[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]

This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.

People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?

Is the idea of “superintelligence” just a memetic hazard?

When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.

The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.

[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]

The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.

In his book, Bostrom lists six things an AI would have to master to take over the world:

  • Intelligence Amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity

If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.

Sam Altman, the man who runs Y Combinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.

Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.

So what’s the answer? What’s the fix?

We need better sci-fi! And like so many things, we already have the technology…

[Ceglowski explains– and demonstrates– what he means…]

In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.

It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…

In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…

Eminently worth reading in full: “Superintelligence: The Idea That Eats Smart People,” from @baconmeteor.

* John Rich


As we find balance, we might recall that it was on this date in 1936 that Alan Turing‘s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…
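The idea Turing formalized in that paper can be captured in a few lines: a tape, a head, and a finite table of rules. Here is a minimal sketch (the rule names and the bit-flipping example are illustrative, not drawn from Turing’s paper):

```python
# A toy Turing machine: a tape, a head, and a finite rule table.
# This illustrative example flips a string of bits and halts.
from collections import defaultdict

def run(tape, rules, state="start", halt="halt", steps=1000):
    """Run a one-tape Turing machine. `rules` maps (state, symbol) to
    (new_state, write_symbol, move), with move in {-1, 0, +1}."""
    tape = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    head = 0
    while state != halt and steps > 0:
        state, tape[head], move = rules[(state, tape[head])]
        head += move
        steps -= 1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Flip every bit, moving right until the blank symbol is reached.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

result = run("0110", flip)  # → "1001"
```

Everything a modern computer does is, in principle, reducible to a (much larger) table of this kind — which is why the paper marks “the start of all of this.”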


“The limits of my language mean the limits of my world”*…

It seems clear that we are on the verge of an impactful new wave of technology. Venkatesh Rao suggests that it may be a lot more impactful than most of us imagine…

In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.

But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.

Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.

And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…

What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?

What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…

There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.

Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create them. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.

Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…

Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.

(Image above: source)

* Ludwig Wittgenstein, Tractatus Logico-Philosophicus


As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, his thinking has had a powerful influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.

Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein— a founder of analytic philosophy, one principal focus of which was the philosophy of language.


“Don’t raise your voice, improve your argument”*…

Through reading, champion debater Bo Seo learned that disagreement can be a source of good, not ill, even in our polarized age.

Nowadays, disagreement is out of fashion. It is seen as the root of our personal and political troubles. Debate, in making a sport out of argument, seems at once a trivial pursuit and a serious impediment to the kinds of conversation we want to cultivate. But in my first book, Good Arguments, I propose that the opposite is true. Students may train to win every disagreement, but they soon learn that this is impossible. Even the best lose most of the competitions they attend. What one can do is disagree better—be more convincing and tenacious, and argue in a manner that keeps others willing to come back for another round. In the end, the prize for all that training and effort is a good conversation…

He shares several recommendations, e.g…

Thinking in an Emergency, by Elaine Scarry

Scarry, one of my English professors at Harvard, is the rare scholar who can change how you move through the world. She has made a career of bringing language to the ineffable ends of human experience: pain and beauty. In Thinking in an Emergency, she places deliberation at the core of a democratic response to emergencies including natural disasters and nuclear war. Scarry argues that debate, both real-time and prospective, need not hinder action and can instead secure the resolve and coordination needed for rapid response. She warns against leaders who invoke catastrophes to demand that their populations stop thinking. In this era of calamities, natural and man-made, Scarry’s wisdom is essential: “Whatever happens, keep talking.”

The Autobiography of Malcolm X, by Malcolm X and Alex Haley

Malcolm X learned to debate as a 20-something in what was then called Norfolk Prison Colony, a state prison founded on reformist ideals that fielded debate teams against local colleges such as Boston University. In his memoir, X describes the experience of finding one’s voice and communing with an audience as a revelation: “I will tell you that, right there, in the prison, debating, speaking to a crowd, was as exhilarating to me as the discovery of knowledge through reading had been … once my feet got wet, I was gone on debating.” For most people, debate is a pastime of school and university years. This memoir shows that one can make a career and a life from its lessons in fierce, courageous, and resolute disagreement.

When Should Law Forgive?, by Martha Minow

One question I struggle with in Good Arguments is when we should stop debating. Minow, a former dean of Harvard Law School, provides here a model of humane consideration on the limits of the adversarial ethic. Hers is an argument for accommodating forgiveness—the “letting go of justified grievances”—in the legal system. She builds the book as one would a spacious house, each area of the law—juvenile justice, debt, amnesties and pardons—a separate chapter in which readers are invited to stay and reflect awhile. Martha Nussbaum is illuminating on related topics in her critique of anger in Anger and Forgiveness, which elicited rebuttal from Myisha Cherry in The Case for Rage, an argument for the emotion’s usefulness in conditions of resistance. The need to balance dispute and conciliation, accountability and grace, cannot be transcended, only better managed.

Seven more recommendations at “The Books That Taught a Debate Champion How to Argue,” from @helloboseo in @TheAtlantic.

* Desmond Tutu


As we put the civil back into civil discourse, we might recall that it was on this date in 1966 that the Roman Catholic Church announced, via a notification from the Congregation for the Doctrine of the Faith, the abolition of the Index Librorum Prohibitorum (“index of prohibited books”), which was originally instituted in 1557. The communique stated that, while the Index maintained its moral force, in that it taught Christians to beware, as required by the natural law itself, of those writings that could endanger faith and morality, it no longer had the force of ecclesiastical positive law with the associated penalties. So… read on.

Title page of Index Librorum Prohibitorum (Venice 1564)


“Based on his liberal use of the semicolon, I just assumed this date would go well”*…

Mary Norris (“The Comma Queen”) appreciates Cecelia Watson‘s appreciation of a much-maligned mark, Semicolon…

… Watson, a historian and philosopher of science and a teacher of writing and the humanities—in other words, a Renaissance woman—gives us a deceptively playful-looking book that turns out to be a scholarly treatise on a sophisticated device that has contributed eloquence and mystery to Western civilization.

The semicolon itself was a Renaissance invention. It first appeared in 1494, in a book published in Venice by Aldus Manutius. “De Aetna,” Watson explains, was “an essay, written in dialogue form,” about climbing Mt. Etna. Its author, Pietro Bembo, is best known today not for his book but for the typeface, designed by Francesco Griffo, in which the first semicolon was displayed: Bembo. The mark was a hybrid between a comma and a colon, and its purpose was to prolong a pause or create a more distinct separation between parts of a sentence. In her delightful history, Watson brings the Bembo semicolon alive, describing “its comma-half tensely coiled, tail thorn-sharp beneath the perfect orb thrown high above it.” Designers, she explains, have since given the mark a “relaxed and fuzzy” look (Poliphilus), rendered it “aggressive” (Garamond), and otherwise adapted it for the modern age: “Palatino’s is a thin flapper in a big hat slouched against the wall at a party.”

The problem with the semicolon is not how it looks but what it does and how that has changed over time. In the old days, punctuation simply indicated a pause. Comma, colon: semicolon; period. Eventually, grammarians and copy editors came along and made themselves indispensable by punctuating (“pointing”) a writer’s prose “to delineate clauses properly, such that punctuation served syntax.” That is, commas, semicolons, and colons were plugged into a sentence in order to highlight, subordinate, or otherwise conduct its elements, connecting them syntactically. One of the rules is that, unless you are composing a list, a semicolon is supposed to be followed by a complete clause, capable of standing on its own. The semicolon can take the place of a conjunction, like “and” or “but,” but it should not be used in addition to it. This is what got circled in red in my attempts at scholarly criticism in graduate school. Sentence length has something to do with it—a long, complex sentence may benefit from a clarifying semicolon—but if a sentence scans without a semicolon it’s best to leave it alone.

Watson has been keeping an eye out for effective semicolons for years. She calculates that there are four-thousand-odd semicolons in “Moby-Dick,” or “one for every 52 words.” Clumsy as nineteenth-century punctuation may seem to a modern reader, Melville’s semicolons, she writes, act like “sturdy little nails,” holding his wide-ranging narrative together….
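Watson’s tally is easy to check against any plain-text edition. A minimal sketch (the sample sentence here is just for illustration; a real run would load the full text of the novel):

```python
# Estimate semicolon density in a text, in the spirit of Watson's
# Moby-Dick tally (~4,000 semicolons, roughly one per 52 words).

def semicolon_density(text: str) -> tuple[int, float]:
    """Return (semicolon count, words per semicolon)."""
    semicolons = text.count(";")
    words = len(text.split())
    return semicolons, (words / semicolons if semicolons else float("inf"))

sample = "Call me Ishmael; some years ago, never mind how long, I went to sea."
count, ratio = semicolon_density(sample)
```

At Watson’s figure of one semicolon per 52 words, four-thousand-odd semicolons implies a text of roughly 208,000 words — about the length of Moby-Dick.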

Eminently worth reading in full: “Sympathy for the Semicolon,” on @ceceliawatson from @MaryNorrisTNY.

Sort of apposite (and completely entertaining/enlightening): “Naming the Unnamed: On the Many Uses of the Letter X.”

(Image above: source)

* Raven Leilani, Luster


As we punctuate punctiliously, we might recall that it was on this date in 1990 that CBS aired the final episode of Bob Newhart’s second successful sitcom series, Newhart, in which he co-starred with Mary Frann through a 184-episode run that had started in 1982. Newhart had, of course, had a huge hit with his first series, The Bob Newhart Show, in which he co-starred with Suzanne Pleshette.

Newhart‘s ending, its final scene, is often cited as the best finale in sit-com history.

“In the sphere of thought, absurdity and perversity remain the masters of this world, and their dominion is suspended only for brief periods”*…

From a (somewhat sarcastic) 1896 essay (“The Art of Controversy”) by that gloomiest of philosophers, Arthur Schopenhauer, advice that (sadly) feels as appropriate today as it surely was then…

1. Carry your opponent’s proposition beyond its natural limits; exaggerate it. The more general your opponent’s statement becomes, the more objections you can find against it. The more restricted and narrow his or her propositions remain, the easier they are to defend by him or her.

2. Use different meanings of your opponent’s words to refute his or her argument.

3. Ignore your opponent’s proposition, which was intended to refer to a particular thing. Rather, understand it in some quite different sense, and then refute it. Attack something different than that which was asserted.

The first three of “Schopenhauer’s 38 Stratagems, or 38 Ways to Win an Argument.” Via @TheBrowser.

[Image above: source]

* Arthur Schopenhauer, “The Art of Controversy”


As we celebrate sophistry, we might recall that it was on this date (or near; scholars disagree) in 325 that Roman Emperor Constantine I convened a gathering in which all of Schopenhauer’s tricks were surely employed: the First Council of Nicaea. An ecumenical council, it was the first effort to attain consensus in the church through an assembly representing all Christendom. Its main accomplishments were settlement of the Christological issue of the divine nature of God the Son and his relationship to God the Father, the construction of the first part of the Nicene Creed, mandating uniform observance of the date of Easter, and the promulgation of early canon law.

Icon depicting the Emperor Constantine and the bishops of the First Council of Nicaea holding the Nicene Creed

