Posts Tagged ‘rhetoric’
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…
Dan Davies took a ride in a silver machine…
A while ago, I was lucky enough to attend a presentation on a Google DeepMind project called “The Habermas Machine”. It’s a really intriguing use of the LLM technology – basically, you take a lot of people who disagree with each other and ask them what they think about an issue. Then you feed their answers into a model, which tries to produce a statement of minimal agreement that all of them might sign up to. They score the extent to which they do agree with it (which trains the model), and explain what it is that they don’t like about the statement. This second round allows the model to come up with another, better version, which also clarifies to the participants what the other side’s reasons are for disagreeing with them.
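(For the programmatically inclined, the loop Davies describes can be sketched in a few lines. The `draft` and `score` functions below are toy stand-ins of my own invention, not DeepMind's actual interface: in the real system, `draft` is a trained LLM and `score` comes from the human participants.)

```python
def mediate(opinions, draft, score, rounds=2):
    """Habermas-Machine-style loop: draft a group statement, have each
    participant score it (0.0-1.0), collect objections from anyone not
    fully on board, then redraft. Returns the best statement and its
    average rating."""
    critiques = []                       # objections from the previous round
    best, best_avg = None, -1.0
    for _ in range(rounds):
        statement = draft(opinions, critiques)
        ratings = [score(opinion, statement) for opinion in opinions]
        avg = sum(ratings) / len(ratings)
        if avg > best_avg:
            best, best_avg = statement, avg
        # participants who don't fully agree explain why; their views
        # feed the next draft
        critiques = [o for o, r in zip(opinions, ratings) if r < 1.0]
    return best, best_avg


# Toy stand-ins: the "model" just splices views together, and a
# "participant" endorses any statement that contains their view.
def draft(opinions, critiques):
    return "; ".join(critiques or opinions)

def score(opinion, statement):
    return 1.0 if opinion in statement else 0.0

print(mediate(["tax less", "spend more"], draft, score))
# → ('tax less; spend more', 1.0)
```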
It’s called “The Habermas Machine” because it’s meant, loosely speaking, to do a similar job to Jürgen Habermas’s “Ideal Speech Situation.” In tests, there seems to be decent evidence that not only is the machine better than a human moderator at coming up with consensus statements, but that the machine-moderated process leads to more convergence of opinions among the actual participants. (I think I might have predicted this; the model obviously has a “flat” affect, and unlike a human being, isn’t always leaking clues from its intonation and body language about what it really thinks of the participants. That might suggest that as LLMs get better at simulating human responses, they might be worse for this purpose!)
There’s really a lot to say and think about this. But it’s Friday [as he wrote this] and I’m a facetious person, so instead I’m going to share the notes I’ve been making ever since seeing the presentation on which other philosophers and social theorists might also benefit from having machines made out of them.
The Giddens Machine – in accordance with the principle of double hermeneutics, it’s the Habermas Machine, but only for reaching agreement on interpretations of Habermas.
The Goffman Machine – after your side lost on the Habermas Machine, it comes along and generates a set of reasons why you shouldn’t feel so bad about that and should come back for another go.
The Bourdieu Machine – you type your views into it, and then it repeats them with slight and subtle adjustments to make you sound more middle class.
The Fourcade/Healy Machine – it gives you a score, then makes you do the work of finding out how to change your views so as to increase your score. Finding equilibrium for the machine is your job now.
The Gambetta Machine – instead of finding a consensus, it selects the most awful version of each conflicting view, and then everyone switches to that in order to show how committed they are.
The Austin Machine – instead of telling the machine “I agree with this statement”, you have to tick a box saying “I hereby agree with this statement”.
The Grice Machine – like the Habermas one, but via conversational implicature it aims to create consensus among all the views that you haven’t expressed rather than the ones you have.
The Derrida Machine – everyone keeps asserting the same statements, but the AI brings them into agreement by changing the meaning of the words themselves.
The Crenshaw Machine – in each round the machine finds a new issue to divide up the group in a different way. Equilibrium is reached when everyone realises they’re on their own and need to get along with each other anyway…
A wry exploration of the possibilities of AI: “Fully automated social theory,” from @dsquareddigest.bsky.social
(Image above: source)
* Alan Kay
###
As we delegate discourse, we might recall that it was on this date in 1981 that the first production model of the DeLorean sports car rolled off the assembly line at the Dunmurry factory, located a few miles from Belfast City Centre in Northern Ireland.
“The pure and simple truth is rarely pure and never simple”*…
An all-too-timely 2016 piece from philosophy professors Scott Aikin and Robert Talisse…
So much political commentary seems to proceed by means of debate rather than report. This is an understandable consequence of new technology that makes engagement easy. Our heightened exposure to debate is a good thing, too. Open debate is democracy’s lifeblood. Yet popular political disagreement has taken on an odd hue. Rather than presenting facts and professing a view, commentators present views concerning the views of their opponents. And often it’s not only views about opponents’ views; many go straight to views about the opponents themselves. Despite heated disagreements over Big Questions like healthcare, stem-cell research, abortion, same-sex marriage, race relations, and global warming, we find a surprising consensus about the nature of political disagreement itself: All agree that, with respect to any Big Question, there is but one intelligent position, and all other positions are not merely wrong, but ignorant, stupid, naïve. And as a consequence, those who cling to these views must themselves be either ignorant or wicked. Or both.
A minute in the Public Affairs section of any bookstore confirms this: Conservatives should talk to liberals “only if they must” because liberalism is a “mental disorder.” Liberals dismiss their conservative opponents, since they are “lying liars” who use their “noise machine” to promote irrationality.
Both views betray a commitment to the Simple Truth Thesis, the claim that Big Questions always admit of a simple, obvious, and easily-stated solution. The Simple Truth Thesis encourages us to hold that a given truth is so simple and so obvious that only the ignorant, wicked, or benighted could possibly deny it. As our popular political commentary accepts the Simple Truth Thesis, there is a great deal of inflammatory rhetoric and righteous indignation, but in fact very little public debate over the issues that matter most. Consequently, the Big Questions over which we are divided remain unexamined, and our reasons for adopting our different answers are never brought to bear in public discussion.
This brings us back to our original observation – there seems to be so much debate. Yet what passes for public debate is in fact no debate at all. No surprise, really. Debate or discussion concerning a Big Question can be worthwhile only when there is more than one reasonable position regarding the question; and this is precisely what the Simple Truth Thesis denies.
It would be a wonderful world were the Simple Truth Thesis true. Our political task simply would be to empower those who know the simple truth, and rebuke the fools who do not. But the Simple Truth Thesis is not true. In fact, it’s a fairytale—soothing, but ultimately unfit for a serious mind. For any Big Question, there are several defensible positions; it is precisely this feature that makes them big. Of course, to say that a position is defensible is not to say that it’s true. To oppose the Simple Truth Thesis is not to embrace relativism (which is itself a version of the Simple Truth view), nor is it to give up on the idea that there is truth; it is rather to give up on the view that the truth is always simple.
This intellectual distance is difficult because we feel invested in our own Big Answers. But it’s a fantasy to think that the billions of people with whom we disagree have all simply failed to appreciate the facts. This fantasy is easily dissolved once we come to realize that those who reject our own Big Answers often give good reasons for their views and against ours. We might not find ourselves convinced by their reasons, of course, but we can no longer see them as ignorant or foolish.
The lesson to draw is that there is a difference between being stupid and being wrong; the most important truths are often the most difficult to discern, even by the most careful and sincere inquirers. This lesson dismantles the Simple Truth Thesis and leads us to acknowledge that although there may be but one correct answer to each Big Question, there are several defensible views concerning which of the going answers is, indeed, correct. So if the Big Questions matter to us, we should be most eager to hear the reasons of our opponents. We should pursue real disagreement, with real interlocutors, not the cooked-up arguments against caricatured opposition on offer from the political commentary industry.
Democracy is the proposition that a just, peaceful, and morally decent society is possible among equals who disagree over Big Questions. Democracy tries to enable such a society by maintaining the conditions under which citizens could reason together, and, despite ongoing disagreement, come to see each other as reasonable. Citizens who see each other in this way can agree to share in the task of collective self-government despite ongoing and even growing discord over Big Questions. The Simple Truth Thesis repudiates this ideal. Accordingly, as our politics become more argumentative, they become less concerned with actual argument. Yet if we lose our capacity to argue with each other—to confront openly each other’s reasons—we will lose our capacity to see each other as equal partners in self-government, and thus we will lose our democracy…
If only: “The Myth of Simple Truths,” in @3QD.
(Image above: source)
* Oscar Wilde
###
As we dig Diogenes, we might send exciting birthday greetings to Otto Binder; he was born on this date in 1911. An author of science fiction and non-fiction books and stories, and comic books, he is best known as the co-creator of Supergirl and for his many scripts for Captain Marvel Adventures and other stories involving the entire superhero Marvel Family. He is credited with writing over 4,400 stories across a variety of publishers under his own name, as well as more than 160 stories under the pen-name Eando Binder.
Indeed, it was as Eando that he wrote “I, Robot,” a sci-fi short story, part of a series about a robot named Adam Link, published in the January 1939 issue of Amazing Stories. Very innovative for its time, “I, Robot” was one of the first robot stories to break away from Frankenstein clichés. It was reprised in two different comic series and adapted into episodes of The Outer Limits.
Isaac Asimov– who is famous for his own I, Robot and the series of novels that followed from it– was heavily influenced by the Binder short story. In his introduction to the story in Isaac Asimov Presents the Great SF Stories (1979), Asimov wrote: “It certainly caught my attention. Two months after I read it, I began ‘Robbie’, about a sympathetic robot, and that was the start of my positronic robot series. Eleven years later, when nine of my robot stories were collected into a book, the publisher named the collection I, Robot over my objections. My book is now the more famous, but Otto’s story was there first.”
“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…
A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…
In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.
This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium, an alpha particle, and release a whole lot of energy:
¹⁴N + ¹⁴N ⇒ ²⁴Mg + α + 17.7 MeV
The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?
Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.
Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.
But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.
At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.
… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.
[Ceglowski unpacks those assumptions…]
If you accept all these premises, what you get is disaster!
Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.
As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.
At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.
From there things get very sci-fi very quickly.
[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]
This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.
…
People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Is the idea of “superintelligence” just a memetic hazard?
When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.
The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.
Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.
But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.
The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.
[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]
The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.
In his book, Bostrom lists six things an AI would have to master to take over the world:
- Intelligence Amplification
- Strategizing
- Social manipulation
- Hacking
- Technology research
- Economic productivity
If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.
Sam Altman, the man who runs Y Combinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.
Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash from non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.
I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.
So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.
So what’s the answer? What’s the fix?
We need better scifi! And like so many things, we already have the technology…
[Ceglowski explains– and demonstrates– what he means…]
In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.
It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.
The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.
And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.
So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.
What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…
In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…
Eminently worth reading in full: “Superintelligence- the idea that eats smart people,” from @baconmeteor.
* John Rich
###
As we find balance, we might recall that it was on this date in 1936 that Alan Turing‘s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…
“The limits of my language mean the limits of my world”*…
It seems clear that we are on the verge of an impactful new wave of technology. Venkatesh Rao suggests that it may be a lot more impactful than most of us imagine…
In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.
But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding.
…
Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.
And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours…
What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic speech, and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?
What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?…
There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.
This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.
Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create them. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.
…
Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche…
Eminently worth reading in full (along with his earlier piece, linked in the text above): “Life After Language,” from @vgr.
(Image above: source)
* Ludwig Wittgenstein, Tractatus Logico-Philosophicus
###
As we ruminate on rhetoric, we might send thoughtful birthday greetings to Bertrand Russell; he was born on this date in 1872. A mathematician, philosopher, logician, and public intellectual, his thinking has had a powerful influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.
Indeed, Russell was– with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Wittgenstein— a founder of analytic philosophy, one principal focus of which was the philosophy of language.
“Don’t raise your voice, improve your argument”*…
Through reading, champion debater Bo Seo learned that disagreement can be a source of good, not ill, even in our polarized age.
Nowadays, disagreement is out of fashion. It is seen as the root of our personal and political troubles. Debate, in making a sport out of argument, seems at once a trivial pursuit and a serious impediment to the kinds of conversation we want to cultivate. But in my first book, Good Arguments, I propose that the opposite is true. Students may train to win every disagreement, but they soon learn that this is impossible. Even the best lose most of the competitions they attend. What one can do is disagree better—be more convincing and tenacious, and argue in a manner that keeps others willing to come back for another round. In the end, the prize for all that training and effort is a good conversation…
He shares several recommendations, e.g…
Thinking in an Emergency, by Elaine Scarry
Scarry, one of my English professors at Harvard, is the rare scholar who can change how you move through the world. She has made a career of bringing language to the ineffable ends of human experience: pain and beauty. In Thinking in an Emergency, she places deliberation at the core of a democratic response to emergencies including natural disasters and nuclear war. Scarry argues that debate, both real-time and prospective, need not hinder action and can instead secure the resolve and coordination needed for rapid response. She warns against leaders who invoke catastrophes to demand that their populations stop thinking. In this era of calamities, natural and man-made, Scarry’s wisdom is essential: “Whatever happens, keep talking.”
…
The Autobiography of Malcolm X, by Malcolm X and Alex Haley
Malcolm X learned to debate as a 20-something in what was then called Norfolk Prison Colony, a state prison founded on reformist ideals that fielded debate teams against local colleges such as Boston University. In his memoir, X describes the experience of finding one’s voice and communing with an audience as a revelation: “I will tell you that, right there, in the prison, debating, speaking to a crowd, was as exhilarating to me as the discovery of knowledge through reading had been … once my feet got wet, I was gone on debating.” For most people, debate is a pastime of school and university years. This memoir shows that one can make a career and a life from its lessons in fierce, courageous, and resolute disagreement.
…
When Should Law Forgive?, by Martha Minow
One question I struggle with in Good Arguments is when we should stop debating. Minow, a former dean of Harvard Law School, provides here a model of humane consideration on the limits of the adversarial ethic. Hers is an argument for accommodating forgiveness—the “letting go of justified grievances”—in the legal system. She builds the book as one would a spacious house, each area of the law—juvenile justice, debt, amnesties and pardons—a separate chapter in which readers are invited to stay and reflect awhile. Martha Nussbaum is illuminating on related topics in her critique of anger in Anger and Forgiveness, which elicited rebuttal from Myisha Cherry in The Case for Rage, an argument for the emotion’s usefulness in conditions of resistance. The need to balance dispute and conciliation, accountability and grace, cannot be transcended, only better managed.
Seven more recommendations at “The Books That Taught a Debate Champion How to Argue,” from @helloboseo in @TheAtlantic.
* Desmond Tutu
###
As we put the civil back into civil discourse, we might recall that it was on this date in 1966 that the Roman Catholic Church announced, via a notification from the Congregation for the Doctrine of the Faith, the abolition of the Index Librorum Prohibitorum (“index of prohibited books”), which was originally instituted in 1557. The communique stated that, while the Index maintained its moral force, in that it taught Christians to beware, as required by the natural law itself, of those writings that could endanger faith and morality, it no longer had the force of ecclesiastical positive law with the associated penalties. So… read on.