(Roughly) Daily

Posts Tagged ‘logic’

“Two dangers constantly threaten the world: order and disorder”*…

After two days of posts on the state of our civil society, a palate-cleanser: Jordana Cepelewicz with a possibly-consoling reminder…

When he died in 1930 at just 26 years old, Frank Ramsey [see here] had already made transformative contributions to philosophy, economics and mathematics. John Maynard Keynes sought his insights; Ludwig Wittgenstein admired him and considered him a close friend. In his lifetime, Ramsey published only eight pages on pure math: the beginning of a paper about a problem in logic. But in that work, he proved a theorem that ultimately led to a whole new branch of mathematics — what would later be called Ramsey theory.

His theorem stated that if a system is large enough, then no matter how disordered it might be, it’s always bound to exhibit some sort of regular structure. Order inevitably emerges from chaos; patterns are unavoidable. Ramsey theory is the study of when this happens — in sets of numbers, in collections of vertices and edges called graphs, and in other systems. The mathematicians Ronald Graham and Joel Spencer likened it to how you can always pick out patterns among the stars in the night sky…
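
The simplest instance makes the idea concrete. The Ramsey number R(3,3) = 6 says that however you color the 15 edges of a complete graph on six vertices with two colors, some triangle ends up with all three edges the same color. A brute-force check (a standard illustration, not taken from the article) confirms it:

```python
from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))        # the 15 edges of K6
triangles = list(combinations(vertices, 3))    # the 20 possible triangles

def has_mono_triangle(coloring):
    """Does this edge-coloring contain a triangle whose edges all share one color?"""
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in triangles)

# Every one of the 2^15 = 32,768 two-colorings of K6 contains a monochromatic
# triangle; on five vertices, by contrast, a coloring can avoid one.
print(all(has_mono_triangle(c) for c in product("RB", repeat=15)))   # True
```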

… In fact, Ramsey theory isn’t just about inevitable patterns found in graphs. Hidden structure emerges in lists of numbers, strings of beads and even card games. In 2019, for example, mathematicians studied collections of sets that can always be arranged to resemble the petals of a sunflower. That same year, Quanta reported on research into sets of numbers that are guaranteed to contain numerical patterns called polynomial progressions. And last year, mathematicians proved a similar result, about sets of integers that must always include three evenly spaced numbers, called arithmetic progressions.
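
The arithmetic-progression flavor of this can be checked on a laptop. A much older, smaller cousin of the results described above is van der Waerden's theorem for three terms and two colors: split the numbers 1 through 9 into two groups however you like and one group always contains three evenly spaced numbers, while 1 through 8 can be split so as to avoid it. A short sketch (ours, not from the article):

```python
from itertools import product

def has_mono_3ap(coloring):
    """coloring[i] is the color of the number i + 1; look for a one-color 3-term AP."""
    n = len(coloring)
    return any(coloring[a] == coloring[a + d] == coloring[a + 2 * d]
               for a in range(n) for d in range(1, (n - a - 1) // 2 + 1))

print(all(has_mono_3ap(c) for c in product("RB", repeat=9)))   # True:  1..9 always has one
print(all(has_mono_3ap(c) for c in product("RB", repeat=8)))   # False: 1..8 can dodge it
```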

In its hunt for patterns, Ramsey theory gets to the core of what mathematics is all about: finding beauty and order in the most unexpected places…

Finding order in chaos: “Why Complete Disorder Is Mathematically Impossible,” from @jordanacep in @QuantaMagazine.

* Paul Valery

###

As we ponder patterns, we might send paradigm-shaping birthday greetings to a woman who found order and pattern of a different– and world-changing– sort: Rosalind Franklin; she was born on this date in 1920. A biophysicist and X-ray crystallographer, Franklin captured the X-ray diffraction images of DNA that were, in the words of Francis Crick, “the data we actually used” when he and James Watson developed their “double helix” hypothesis for the structure of DNA. Indeed, it was Franklin who argued to Crick and Watson that the backbones of the molecule had to be on the outside (something that neither they nor their competitor in the race to understand DNA, Linus Pauling, had understood). Franklin never received the recognition she deserved for her independent work– her paper was published in Nature after Crick and Watson’s, which barely mentioned her– and she died of cancer four years before Crick, Watson, and Franklin’s King’s College colleague Maurice Wilkins won the Nobel Prize for the discovery.

source
 

“Simplicity, carried to the extreme, becomes elegance”*…

Jordana Cepelewicz on a very different approach to computing…

In 1936, the British mathematician Alan Turing came up with an idea for a universal computer. It was a simple device: an infinite strip of tape covered in zeros and ones, together with a machine that could move back and forth along the tape, changing zeros to ones and vice versa according to some set of rules. He showed that such a device could be used to perform any computation.
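
The whole device fits in a few lines of code. Below is a minimal sketch (the rule table and tape are invented purely for illustration, not drawn from the article): a tape of 0s and 1s, a head position, a current state, and a table saying what to write, which way to move, and which state comes next.

```python
def run_turing_machine(rules, tape, state="start", max_steps=100):
    """Run a tiny Turing machine; rules maps (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol (default 0)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, 0))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# A made-up rule table: flip 1s to 0s while moving right, then halt at the first 0.
rules = {
    ("start", 1): (0, "R", "start"),
    ("start", 0): (1, "R", "halt"),
}
print(run_turing_machine(rules, [1, 1, 1, 0]))   # -> [0, 0, 0, 1]
```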

Turing did not intend for his idea to be practical for solving problems. Rather, it offered an invaluable way to explore the nature of computation and its limits. In the decades since that seminal idea, mathematicians have racked up a list of even less practical computing schemes. Games like Minesweeper or Magic: The Gathering could, in principle, be used as general-purpose computers. So could so-called cellular automata like John Conway’s Game of Life, a set of rules for evolving black and white squares on a two-dimensional grid.
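
The Game of Life mentioned above runs on just two rules: a live cell with two or three live neighbors survives, and a dead cell with exactly three live neighbors is born. A minimal sketch of one step (the coordinates and example pattern are ours, for illustration only):

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Game of Life, given a set of live (x, y) cells."""
    counts = {}
    for (x, y), (dx, dy) in product(live, product((-1, 0, 1), repeat=2)):
        if (dx, dy) != (0, 0):
            counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # Born with exactly 3 neighbors; survives with 2 or 3 (2 only if already alive).
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}                       # three cells in a row
print(life_step(blinker) == {(1, 0), (1, 1), (1, 2)})    # flips to the perpendicular row: True
```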

In September 2023, Inna Zakharevich of Cornell University and Thomas Hull of Franklin & Marshall College showed that anything that can be computed can be computed by folding paper. They proved that origami is “Turing complete” — meaning that, like a Turing machine, it can solve any tractable computational problem, given enough time…

Read on for more on how folding paper can, in principle, be used to perform any possible computation: “How to Build an Origami Computer” from @jordanacep in @QuantaMagazine.

* Jon Franklin

###

As we contemplate calculation, we might send entropic birthday greetings to Rolf Landauer; he was born on this date in 1927. A physicist, he made important contributions in several areas: the thermodynamics of information processing, condensed matter physics, and the conductivity of disordered media… most of which were important to the development of computing (of the electronic variety).

He is best known for his discovery and formulation of what’s known as Landauer’s principle: that in any logically irreversible operation that manipulates information, such as erasing a bit of memory, entropy increases and an associated amount of energy is dissipated as heat– a “thermodynamic cost of forgetting,” relevant to chip design (how closely packed elements can be on a chip and still handle the heat), reversible computing, quantum information, and quantum computing… but not an issue for origami.
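
The bound itself is tiny at everyday temperatures: erasing one bit costs at least k·T·ln 2 of heat. A quick back-of-the-envelope check (our arithmetic, for scale):

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, joules per kelvin
T = 300.0              # roughly room temperature, kelvin
print(k_B * T * math.log(2))   # ~2.9e-21 joules dissipated per erased bit
```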

source

“A proof tells us where to concentrate our doubts”*…

Andrew Granville at work

Number theorist Andrew Granville on what mathematics really is, on why objectivity is never quite within reach, and on the role that AI might play…

… What is a mathematical proof? We tend to think of it as a revelation of some eternal truth, but perhaps it is better understood as something of a social construct.

Andrew Granville, a mathematician at the University of Montreal, has been thinking about that a lot recently. After being contacted by a philosopher about some of his writing, “I got to thinking about how we arrive at our truths,” he said. “And once you start pushing at that door, you find it’s a vast subject.”

Quanta spoke with Granville about the nature of mathematical proof — from how proofs work in practice to popular misconceptions about them, to how proof-writing might evolve in the age of artificial intelligence…

[excerpts from that interview follow…]

How mathematicians go about research isn’t generally portrayed well in popular media. People tend to see mathematics as this pure quest, where we just arrive at great truths by pure thought alone. But mathematics is about guesses — often wrong guesses. It’s an experimental process. We learn in stages…

The culture of mathematics is all about proof. We sit around and think, and 95% of what we do is proof. A lot of the understanding we gain is from struggling with proofs and interpreting the issues that come up when we struggle with them…

The main point of a proof is to persuade the reader of the truth of an assertion. That means verification is key. The best verification system we have in mathematics is that lots of people look at a proof from different perspectives, and it fits well in a context that they know and believe. In some sense, we’re not saying we know it’s true. We’re saying we hope it’s correct, because lots of people have tried it from different perspectives. Proofs are accepted by these community standards.

Then there’s this notion of objectivity — of being sure that what is claimed is right, of feeling like you have an ultimate truth. But how can we know we’re being objective? It’s hard to take yourself out of the context in which you’ve made a statement — to have a perspective outside of the paradigm that has been put in place by society. This is just as true for scientific ideas as it is for anything else…

[Granville runs through a history of proof, from Aristotle, through Euclid, to Hilbert, then Russell and Whitehead, ending with Gödel…]

To discuss mathematics, you need a language, and a set of rules to follow in that language. In the 1930s, Gödel proved that no matter how you select your language, there are always statements in that language that are true but that can’t be proved from your starting axioms. It’s actually more complicated than that, but still, you have this philosophical dilemma immediately: What is a true statement if you can’t justify it? It’s crazy.

So there’s a big mess. We are limited in what we can do.

Professional mathematicians largely ignore this. We focus on what’s doable. As Peter Sarnak likes to say, “We’re working people.” We get on and try to prove what we can…

[Granville then turns to computers…]

We’ve moved to a different place, where computers can do some wild things. Now people say, oh, we’ve got this computer, it can do things people can’t. But can it? Can it actually do things people can’t? Back in the 1950s, Alan Turing said that a computer is designed to do what humans can do, just faster. Not much has changed.

For decades, mathematicians have been using computers — to make calculations that can help guide their understanding, for instance. What AI can do that’s new is to verify what we believe to be true. Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification.
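
For readers who have never met a proof assistant, here is roughly what a machine-checked statement looks like in Lean 4 (a toy example of ours, not one of the proofs Granville has in mind); the kernel accepts the file only if the supplied proofs really establish the claims.

```lean
-- Commutativity of addition on the natural numbers, justified by a library lemma:
example (m n : Nat) : m + n = n + m := Nat.add_comm m n

-- A claim the kernel can check by pure computation:
example : 2 + 2 = 4 := rfl
```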

But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. So I’m not saying that I believe something like Lean is going to make a lot of errors. I’m just not sure it’s any more secure than most things done by humans…

Perhaps it could assist in creating a proof. Maybe in five years’ time, I’ll be saying to an AI model like ChatGPT, “I’m pretty sure I’ve seen this somewhere. Would you check it out?” And it’ll come back with a similar statement that’s correct.

And then once it gets very, very good at that, perhaps you could go one step further and say, “I don’t know how to do this, but is there anybody who’s done something like this?” Perhaps eventually an AI model could find skilled ways to search the literature to bring tools to bear that have been used elsewhere — in a way that a mathematician might not foresee.

However, I don’t understand how ChatGPT can go beyond a certain level to do proofs in a way that outstrips us. ChatGPT and other machine learning programs are not thinking. They are using word associations based on many examples. So it seems unlikely that they will transcend their training data. But if that were to happen, what will mathematicians do? So much of what we do is proof. If you take proofs away from us, I’m not sure who we become…

Eminently worth reading in full: “Why Mathematical Proof Is a Social Compact,” in @QuantaMagazine.

* Morris Kline

###

As we add it up, we might send carefully calculated birthday greetings to Edward G. Begle; he was born on this date in 1914. A mathematician who was an accomplished topologist, he is best remembered for his role as the director of the School Mathematics Study Group (SMSG), the primary group credited for developing what came to be known as The New Math (a pedagogical response to Sputnik, taught in American grade schools from the late 1950s through the 1970s)… which will be well-known to (if not necessarily fondly recalled by) readers of a certain age.

source

“A prudent question is one-half of wisdom”*…

Sir Francis Bacon, portrait by Paul van Somer I, 1617

The death of Queen Elizabeth I created a career opportunity for philosopher and statesman Francis Bacon– one that, as Susan Wise Bauer explains, led him to found empiricism, to pioneer inductive reasoning, and in so doing, to advance the scientific method…

In 1603, Francis Bacon, London born, was forty-three years old: a trained lawyer and amateur philosopher, happily married, politically ambitious, perpetually in debt.

He had served Elizabeth I of England loyally at court, without a great deal of recognition in return. But now Elizabeth was dead at the age of sixty-nine, and her crown would go to her first cousin twice removed: James VI of Scotland, James I of England.

Francis Bacon hoped for better things from the new king, but at the moment he had no particular ‘in’ at the English court. Forced to be patient, he began working on a philosophical project he’d had in mind for some years–a study of human knowledge that he intended to call Of the Proficience and Advancement of Learning, Divine and Human.

Like most of Bacon’s undertakings, the project was ridiculously ambitious. He set out to classify all learning into the proper branches and lay out all of the possible impediments to understanding. Part I condemned what he called the three ‘distempers’ of learning, which included ‘vain imaginations,’ pursuits such as astrology and alchemy that had no basis in actual fact; Part II divided all knowledge into three branches and suggested that natural philosophy should occupy the prime spot. Science, the project of understanding the universe, was the most important pursuit man could undertake. The study of history (‘everything that has happened’) and poesy (imaginative writings) took definite second and third places.

For a time, Bacon didn’t expand on these ideas. The Advancement of Learning opened with a fulsome dedication to James I (‘I have been touched–yea, and possessed–with an extreme wonder at those your virtues and faculties . . . the largeness of your capacity, the faithfulness of your memory, the swiftness of your apprehension, the penetration of your judgment, and the facility and order of your elocution …. There hath not been since Christ’s time any king or temporal monarch which hath been so learned in all literature and erudition, divine and human’), and this groveling soon yielded fruit. In 1607 Bacon was appointed as solicitor general, a position he had coveted for years, and over the next decade or so he poured his energies into his government responsibilities.

He did not return to natural philosophy until after his appointment to the even higher post of chancellor in 1618. Now that he had battled his way to the top of the political dirt pile, he announced his intentions to write a work with even greater scope–a new, complete system of philosophy that would shape the minds of men and guide them into new truths. He called this masterwork the Great Instauration: the Great Establishment, a whole new way of thinking, laid out in six parts.

Part I, a survey of the existing ‘ancient arts’ of the mind, repeated the arguments of the Advancement of Learning. But Part II, published in 1620 as a stand-alone work, was something entirely different. It was a wholesale challenge to Aristotelian methods, a brand-new ‘doctrine of a more perfect use of reason.’

Aristotelian thinking relies heavily on deductive reasoning, which for ancient logicians and philosophers was the highest and best road to the truth. Deductive reasoning moves from general statements (premises) to specific conclusions.

MAJOR PREMISE: All heavy matter falls toward the center of the universe.
MINOR PREMISE: The earth is made of heavy matter.
MINOR PREMISE: The earth is not falling.
CONCLUSION: The earth must already be at the center of the universe.

But Bacon had come to believe that deductive reasoning was a dead end that distorted evidence: ‘Having first determined the question according to his will,’ he objected, ‘man then resorts to experience, and bending her to conformity with his placets [expressions of assent], leads her about like a captive in a procession.’ Instead, he argued, the careful thinker must reason the other way around: starting from specifics and building toward general conclusions, beginning with particular pieces of evidence and working, inductively, toward broader assertions.

This new way of thinking–inductive reasoning–had three steps to it. The ‘true method,’ Bacon explained,

‘first lights the candle, and then by means of the candle shows the way; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms, and from established axioms again new experiments.’

In other words, the natural philosopher must first come up with an idea about how the world works: ‘lighting the candle.’ Second, he must test the idea against physical reality, against ‘experience duly ordered’–both observations of the world around him and carefully designed experiments. Only then, as a last step, should he ‘deduce axioms,’ coming up with a theory that could be claimed to carry truth. 

Hypothesis, experiment, conclusion: Bacon had just traced the outlines of the scientific method…

Francis Bacon and the Scientific Method

An excerpt from The Story of Western Science by @SusanWiseBauer, via the invaluable @delanceyplace.

* Francis Bacon

###

As we embrace empiricism, we might send carefully-transmitted birthday greetings to Augusto Righi; he was born on this date in 1850. A physicist and a pioneer in the study of electromagnetism, he showed that radio waves displayed characteristics of light wave behavior (reflection, refraction, polarization, and interference), with which they shared the electromagnetic spectrum. In 1894 Righi was the first person to generate microwaves.

Righi influenced the young Guglielmo Marconi, the inventor of radio, who visited him at his lab. Indeed, Marconi invented the first practical wireless telegraphy radio transmitters and receivers in 1894 using Righi’s four ball spark oscillator (from Righi’s microwave work) in his transmitters.

source

“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…

A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…

In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.

This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium, an alpha particle, and release a whole lot of energy:

N¹⁴ + N¹⁴ ⇒ Mg²⁴ + α + 17.7 MeV
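
(For scale: 17.7 MeV per fusion is about 2.8 × 10⁻¹² joules, using 1 MeV ≈ 1.602 × 10⁻¹³ J; that is our conversion, not a figure from the talk.)

```python
MEV_IN_JOULES = 1.602176634e-13
print(17.7 * MEV_IN_JOULES)    # ~2.8e-12 joules released per pair of fused nitrogen nuclei
```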

The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?

Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.

Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.

… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.

[Ceglowski unpacks those assumptions…]

If you accept all these premises, what you get is disaster!

Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.

As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.

At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.

From there things get very sci-fi very quickly.

[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]

This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.

People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?

Is the idea of “superintelligence” just a memetic hazard?

When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.

The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.

[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]

The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.

In his book, Bostrom lists six things an AI would have to master to take over the world:

  • Intelligence Amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity

If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.

Sam Altman, the man who runs YCombinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.

Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.

So what’s the answer? What’s the fix?

We need better scifi! And like so many things, we already have the technology…

[Ceglowski explains– and demonstrates– what he means…]

In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.

It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…

In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…

Eminently worth reading in full: “Superintelligence: The Idea That Eats Smart People,” from @baconmeteor.

* John Rich

###

As we find balance, we might recall that it was on this date in 1936 that Alan Turing’s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…

source