(Roughly) Daily

Posts Tagged ‘Mathematics’

“Mathematics has not a foot to stand on which is not purely metaphysical”*…

Battle of Maida 1806, part of the invasion and occupation of Naples by Napoleon’s French Empire (source)

Lest we forget…

A forgotten episode in French-occupied Naples in the years around 1800—just after the French Revolution—illustrates why it makes sense to see mathematics and politics as entangled. The protagonists of this story were gravely concerned about how mainstream mathematical methods were transforming their world—somewhat akin to our current-day concerns about how digital algorithms are transforming ours. But a key difference was their straightforward moral and political reading of those mathematical methods. By contrast, in our own era we seem to think that mathematics offers entirely neutral tools for ordering and reordering the world—we have, in other words, forgotten something that was obvious to them.

In this essay, I’ll use the case of revolutionary Naples to argue that the rise of a new and allegedly neutral mathematics—characterized by rigor and voluntary restriction—was a mathematical response to pressing political problems. Specifically, it was a response to the question of how to stabilize social order after the turbulence of the French Revolution. Mathematics, I argue, provided the logical infrastructure for the return to order. This episode, then, shows how and why mathematical concepts and methods are anything but timeless or neutral; they define what “reason” is, and what it is not, and thus the concrete possibilities of political action. The technical and political are two sides of the same coin—and changes in notions like mathematical rigor, provability, and necessity simultaneously constitute changes in our political imagination…

Massimo Mazzotti with an adaptation from his new book, Reactionary Mathematics: A Genealogy of Purity: “Foundational Anxieties, Modern Mathematics, and the Political Imagination,” @maxmazzotti in @LAReviewofBooks.

* Thomas De Quincey

###

As we count on it, we might send carefully-calculated birthday greetings to Regiomontanus (or Johannes Müller von Königsberg, as he was christened); he was born on this date in 1436. A mathematician, astrologer, and astronomer of the German Renaissance, he and his work were instrumental in the development of Copernican heliocentrism during his lifetime and in the decades following his death.

source

“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…

A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…

In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.

This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will fuse into an atom of magnesium and an alpha particle, releasing a whole lot of energy:

¹⁴N + ¹⁴N ⇒ ²⁴Mg + α + 17.7 MeV

The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?

Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.

Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.

… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.

[Cegłowski unpacks those assumptions…]

If you accept all these premises, what you get is disaster!

Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.

As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.

At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.

From there things get very sci-fi very quickly.

[Cegłowski unspools a scenario in which Bostrom’s worst nightmare comes true…]

This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but to vaccinate you against it.

People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?

Is the idea of “superintelligence” just a memetic hazard?

When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.

The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.

[Cegłowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]

The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.

In his book, Bostrom lists six things an AI would have to master to take over the world:

  • Intelligence amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity

If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.

Sam Altman, the man who runs Y Combinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.

Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash from non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.

So what’s the answer? What’s the fix?

We need better scifi! And like so many things, we already have the technology…

[Cegłowski explains– and demonstrates– what he means…]

In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.

It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…

In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…

Eminently worth reading in full: “Superintelligence: The Idea That Eats Smart People,” from @baconmeteor.

* John Rich

###

As we find balance, we might recall that it was on this date in 1936 that Alan Turing’s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…

source

“Machines take me by surprise with great frequency”*…

In search of universals in the 17th century, Gottfried Leibniz imagined the calculus ratiocinator, a theoretical logical calculation framework aimed at universal application– one that led Norbert Wiener to suggest that Leibniz should be considered the patron saint of cybernetics. In the 19th century, Charles Babbage and Ada Lovelace took a pair of whacks at making it real.

Ironically, it was confronting the impossibility of a universal calculator that led to modern computing. In 1936 (the same year that Charlie Chaplin released Modern Times) Alan Turing (following on Gödel’s demonstration that mathematics is incomplete and addressing Hilbert’s “decision problem,” querying the limits of computation) published the (notional) design of a “machine” that elegantly demonstrated those limits– and, as Sheon Han explains, birthed computing as we know it…

… [Hilbert’s] question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.

The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation, but also settled a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It’s abstract because it doesn’t (and can’t) physically exist as a tangible device. Instead, it’s a conceptual model of computation: If the machine can calculate a function, then the function is computable.

With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process — a set of instructions, which today we’d call an algorithm — that can always determine whether a given statement is true?…

… in 1936, Church and Turing — using different methods — independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway’s Game of Life, are undecidable: No algorithm can determine whether a certain pattern will appear from an initial pattern.

Beyond answering these fundamental questions, Turing’s machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of other Turing machines (their rules and input tapes) and simulate their behaviors on its own input tape, producing the same output that the simulated machine would produce, just as today’s computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture — called the von Neumann architecture — that made the universal Turing machine concept possible in a real-life machine…
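For readers who want to see the idea in motion, here is a minimal sketch of a Turing machine simulator in Python. It is not Turing’s original formalism or anything from Han’s article– the rule-table format and the little “add one to a unary number” machine are invented for illustration– but it shows the essential loop: read a symbol, consult a finite rule table, write, move, change state.

```python
# Minimal Turing machine simulator (illustrative sketch only).
# A "machine" is a dict mapping (state, symbol) -> (new_symbol, move, new_state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    output = "".join(cells[i] for i in sorted(cells)).strip(blank)
    return output, state

# Example machine: append one '1' to a block of 1s (unary n -> n + 1).
increment = {
    ("start", "1"): ("1", "R", "start"),   # scan right across the 1s
    ("start", "_"): ("1", "R", "halt"),    # write a 1 on the first blank, then halt
}

print(run_turing_machine(increment, "111"))   # ('1111', 'halt')
```

A universal Turing machine is, in effect, this same loop with the rule table itself encoded on the tape and interpreted by a fixed program– the move that von Neumann’s stored-program architecture later made physical.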

As Turing said, “if a machine is expected to be infallible, it cannot also be intelligent.” On the importance of thought experiments: “The Most Important Machine That Was Never Built,” from @sheonhan in @QuantaMagazine.

* Alan Turing

###

As we sum it up, we might spare a thought for Martin Gardner; he died on this date in 2010.  Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf., especially, his work on Lewis Carroll– including the delightful Annotated Alice– and on G.K. Chesterton).  And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a monthly column (“Notes of a Fringe-Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.

 source

Written by (Roughly) Daily

May 22, 2023 at 1:00 am

“There is only one good, knowledge, and one evil, ignorance”*…

The School of Athens (1509–1511) by Raphael, depicting famous classical Greek philosophers (source)

If only it were that simple. Trevor Klee unpacks the travails of Galileo to illustrate the way that abstractions become practical “knowledge”…

… We’re all generally looking for the newest study, or the most up-to-date review. At the very least, we certainly aren’t looking through ancient texts for scientific truths.

This might seem obvious to you. Of course you’d never look at an old paper. That old paper was probably done with worse instruments and worse methods. Just because something’s old or was written by someone whose name you recognize doesn’t mean that it’s truthful.

But why is it obvious to you? Because you live in a world that philosophy built. The standards for truth that you imbibed as a child are not natural standards of truth. If you had been an educated person in 1200s Europe, your standard for truth would have been what has stood the test of time. You would have lived among the ruins of Rome and studied the anatomy texts of the Greeks, known that your society could produce neither of those, and concluded that they knew something that your society could not. Your best hope would then be to simply copy them as best as possible.

This was less true by the time Galileo was alive. This is why an educated man like Galileo would have entertained the idea that he knew better than the ancient Greeks, and why his ideas found some purchase among his fellow academicians (including the then Pope, actually). But still, there was a prominent train of thought that promoted the idea that a citation from Aristotle was worth more than a direct observation from a telescope.

But you live in a different world now. You live in a world in which the science of tomorrow is better than the science of today, and our societal capabilities advance every year. We can build everything the ancients did and stuff they never even imagined possible. So you respect tradition less, and respect what is actually measured most accurately in the physical world more.

Today, this battle over truth is so far in the past that we don’t even know it was ever a battle. The closest we come to this line of reasoning is when new age medicine appeals to “ancient wisdom”, but even they feel compelled to quote studies. Even more modern battles are mostly settled, like the importance of randomized, double-blinded controlled studies over non-randomized, non-controlled studies.

The reason we mark battles is not just for fun or historical curiosity. It’s to remind us that what we take for granted was actually fought for by generations before us. And, it’s to make sure that we know the importance of teaching these lessons so thoroughly that future generations take them for granted as well. A world in which nobody would dream of established theory overturning actual empirical evidence is a better world than the one that Galileo lived in…

On the importance of understanding the roots of our understanding: “You live in a world that philosophy built,” from @trevor_klee via @ByrneHobart.

Apposite (in an amusing way): “Going Against The Grain Weevils,” on Aristotle’s Generation of Animals and household pests.

* Socrates, from Diogenes Laertius, Lives and Opinions of Eminent Philosophers (probably early third century CE)

###

As we examine epistemology, we might send elegantly phrased and eclectic birthday greetings to Persian polymath Omar Khayyam; the philosopher, mathematician, astronomer, epigrammatist, and poet was born on this date in 1048. While he’s probably best known to English-speakers as a poet, via Edward FitzGerald’s famous translation of (what he called) the Rubaiyat of Omar Khayyam, FitzGerald’s attribution of the book’s poetry to Omar (as opposed to the aphorisms and other quotes in the volume) is now questioned by many scholars (who believe those verses to be by several different Persian authors).

In any case, Omar was unquestionably one of the major philosophers, mathematicians and astronomers of the medieval period.  He is the author of one of the most important treatises on algebra written before modern times, the Treatise on Demonstration of Problems of Algebra, which includes a geometric method for solving cubic equations by intersecting a hyperbola with a circle.  His astronomical observations contributed to the reform of the Persian calendar.  And he made important contributions to mechanics, geography, mineralogy, music, climatology and Islamic theology.
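To give a flavor of what a “geometric solution” of a cubic looks like, here is a modern reconstruction of the best-known case– the one Khayyam treats with a parabola and a circle (he used hyperbolas for other families of cubics). For the equation \(x^3 + bx = c\) with \(b, c > 0\), take

\[
\text{parabola: } x^2 = \sqrt{b}\,y, \qquad \text{circle: } x^2 + y^2 = \tfrac{c}{b}\,x .
\]

Substituting \(y = x^2/\sqrt{b}\) into the circle’s equation gives

\[
x^2 + \frac{x^4}{b} = \frac{c}{b}\,x \;\Longrightarrow\; x\left(x^3 + bx - c\right) = 0 ,
\]

so the nonzero intersection point of the two curves has abscissa satisfying \(x^3 + bx = c\). Khayyam could construct the curves and read off the root geometrically, even without an algebraic formula for it.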

 source

Written by (Roughly) Daily

May 15, 2023 at 1:00 am

“To Infinity and Beyond!”*…

The idea of infinity is probably about as old as numbers themselves, going back to whenever people first realized that they could keep counting forever. But even though we have a sign for infinity and can refer to the concept in casual conversation, infinity remains profoundly mysterious, even to mathematicians. Steven Strogatz explores that mystery with Justin Moore

No one really knows where the idea of infinity came from, but it must be very ancient — as old as people’s hopes and fears about things that could conceivably go on forever. Some of them are scary, like bottomless pits, and some of them are uplifting, like endless love. Within mathematics, the idea of infinity is probably about as old as numbers themselves, once people realized that they could just keep on counting forever — 1, 2, 3 and so on. But even though infinity is a very old idea, it remains profoundly mysterious. People have been scratching their heads about infinity for thousands of years now, at least since Zeno and Aristotle in ancient Greece.

But how do mathematicians make sense of infinity today? Are there different sizes of infinity? Is infinity useful to mathematicians? And if so, how exactly? And what does all this have to do with the foundations of mathematics itself?…
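One standard way to make the “different sizes” question precise (the usual starting point, not anything specific to this conversation) is Cantor’s diagonal argument. Suppose the real numbers between 0 and 1 could be listed in full, one per counting number, via their decimal expansions, and define a new digit sequence from the diagonal:

\[
\begin{aligned}
r_1 &= 0.\,d_{11}\,d_{12}\,d_{13}\ldots\\
r_2 &= 0.\,d_{21}\,d_{22}\,d_{23}\ldots\\
r_3 &= 0.\,d_{31}\,d_{32}\,d_{33}\ldots
\end{aligned}
\qquad\qquad
x_n =
\begin{cases}
5 & \text{if } d_{nn} \neq 5,\\
4 & \text{if } d_{nn} = 5.
\end{cases}
\]

The number \(x = 0.x_1 x_2 x_3\ldots\) differs from every \(r_n\) in its \(n\)-th digit, so it cannot appear anywhere on the list– no such list can be complete. The infinity of the reals is therefore strictly bigger than the infinity of the counting numbers: in the usual notation, \(\aleph_0 < 2^{\aleph_0}\).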

All infinities go on forever, so “How Can Some Infinities Be Bigger Than Others?“, from @stevenstrogatz in @QuantaMagazine.

See also: Alan Lightman‘s “Why the paradoxes of infinity still puzzle us today” (source of the image above).

* Buzz Lightyear

###

As we envision endlessness, we might send carefully-calculated birthday greetings to Gaspard Monge; he was born on this date in 1746. A mathematician, he is considered the inventor of descriptive geometry (the mathematical basis of technical drawing) and the father of differential geometry (the study of smooth shapes and spaces, AKA smooth manifolds).

During the French Revolution he was involved in the reform of the French educational system, most notably as the lead founder of the École Polytechnique.

source
