Posts Tagged ‘logic’
“Two dangers constantly threaten the world: order and disorder”*…
After two days of posts on the state of our civil society, a palate cleanser: Jordana Cepelewicz with a possibly-consoling reminder…
When he died in 1930 at just 26 years old, Frank Ramsey [see here] had already made transformative contributions to philosophy, economics and mathematics. John Maynard Keynes sought his insights; Ludwig Wittgenstein admired him and considered him a close friend. In his lifetime, Ramsey published only eight pages on pure math: the beginning of a paper about a problem in logic. But in that work, he proved a theorem that ultimately led to a whole new branch of mathematics — what would later be called Ramsey theory.
His theorem stated that if a system is large enough, then no matter how disordered it might be, it’s always bound to exhibit some sort of regular structure. Order inevitably emerges from chaos; patterns are unavoidable. Ramsey theory is the study of when this happens — in sets of numbers, in collections of vertices and edges called graphs, and in other systems. The mathematicians Ronald Graham and Joel Spencer likened it to how you can always pick out patterns among the stars in the night sky…
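The smallest classic case of Ramsey’s theorem is concrete enough to verify by brute force: every red/blue coloring of the edges of the complete graph on six vertices contains a monochromatic triangle, while five vertices can still avoid one (this is the statement that the Ramsey number R(3,3) = 6). A minimal sketch in Python (the function names are my own, purely illustrative):

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """True if some triangle on vertices 0..n-1 has all three edges one color."""
    return any(
        coloring[frozenset((a, b))]
        == coloring[frozenset((a, c))]
        == coloring[frozenset((b, c))]
        for a, b, c in combinations(range(n), 3)
    )

def ramsey_check(n):
    """True if EVERY 2-coloring of K_n's edges forces a monochromatic triangle."""
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    for colors in product((0, 1), repeat=len(edges)):
        if not has_mono_triangle(dict(zip(edges, colors)), n):
            return False  # found a coloring with no one-color triangle
    return True
```

`ramsey_check(5)` finds a coloring (the pentagon/pentagram one) that avoids any monochromatic triangle; `ramsey_check(6)` exhausts all 2¹⁵ colorings of K6 and finds no escape.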
…
… In fact, Ramsey theory isn’t just about inevitable patterns found in graphs. Hidden structure emerges in lists of numbers, strings of beads and even card games. In 2019, for example, mathematicians studied collections of sets that can always be arranged to resemble the petals of a sunflower. That same year, Quanta reported on research into sets of numbers that are guaranteed to contain numerical patterns called polynomial progressions. And last year, mathematicians proved a similar result, about sets of integers that must always include three evenly spaced numbers, called arithmetic progressions.
In its hunt for patterns, Ramsey theory gets to the core of what mathematics is all about: finding beauty and order in the most unexpected places…
Finding order in chaos: “Why Complete Disorder Is Mathematically Impossible,” from @jordanacep in @QuantaMagazine.
* Paul Valéry
###
As we ponder patterns, we might send paradigm-shaping birthday greetings to a woman who found order and pattern of a different– and world-changing– sort: Rosalind Franklin; she was born on this date in 1920. A biophysicist and X-ray crystallographer, Franklin captured the X-ray diffraction images of DNA that were, in the words of Francis Crick, “the data we actually used” when he and James Watson developed their “double helix” hypothesis for the structure of DNA. Indeed, it was Franklin who argued to Crick and Watson that the backbones of the molecule had to be on the outside (something that neither they nor their competitor in the race to understand DNA, Linus Pauling, had understood). Franklin never received the recognition she deserved for her independent work– her paper was published in Nature after Crick and Watson’s, which barely mentioned her– and she died of cancer four years before Crick, Watson, and Franklin’s King’s College colleague Maurice Wilkins won the Nobel Prize for the discovery.

“Simplicity, carried to the extreme, becomes elegance”*…
Jordana Cepelewicz on a very different approach to computing…
In 1936, the British mathematician Alan Turing came up with an idea for a universal computer. It was a simple device: an infinite strip of tape covered in zeros and ones, together with a machine that could move back and forth along the tape, changing zeros to ones and vice versa according to some set of rules. He showed that such a device could be used to perform any computation.
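Turing’s device is simple enough to sketch directly. Below is a toy simulator (all names my own): rules map a (state, symbol) pair to what to write, which way to move, and the next state. The example rule set just walks right inverting bits until it reaches blank tape, but the same harness runs any machine you can encode:

```python
BLANK = "_"

def run_turing(rules, tape, state, max_steps=10_000):
    """Simulate a one-tape Turing machine.
    rules: (state, symbol) -> (symbol_to_write, head_move, next_state)."""
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(pos, BLANK))]
        cells[pos] = write
        pos += move
    return [v for _, v in sorted(cells.items()) if v != BLANK]

# A two-rule machine that inverts every bit, then halts at the first blank cell.
invert = {
    ("run", 0): (1, +1, "run"),
    ("run", 1): (0, +1, "run"),
    ("run", BLANK): (BLANK, 0, "halt"),
}
```

Here `run_turing(invert, [1, 0, 1, 1], "run")` returns `[0, 1, 0, 0]`.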
Turing did not intend for his idea to be practical for solving problems. Rather, it offered an invaluable way to explore the nature of computation and its limits. In the decades since that seminal idea, mathematicians have racked up a list of even less practical computing schemes. Games like Minesweeper or Magic: The Gathering could, in principle, be used as general-purpose computers. So could so-called cellular automata like John Conway’s Game of Life, a set of rules for evolving black and white squares on a two-dimensional grid.
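Conway’s rules themselves fit in a few lines: a live cell survives with two or three live neighbors, and a dead cell is born with exactly three. A minimal sketch that tracks only the live cells on an unbounded grid:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Two applications of `life_step` return the blinker to its starting position; patterns like this, composed at enormous scale, are what make Life capable of general-purpose computation.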
In September 2023, Inna Zakharevich of Cornell University and Thomas Hull of Franklin & Marshall College showed that anything that can be computed can be computed by folding paper. They proved that origami is “Turing complete” — meaning that, like a Turing machine, it can solve any tractable computational problem, given enough time…
Read on for more on how folding paper can, in principle, be used to perform any possible computation: “How to Build an Origami Computer” from @jordanacep in @QuantaMagazine.
* Jon Franklin
###
As we contemplate calculation, we might send entropic birthday greetings to Rolf Landauer; he was born on this date in 1927. A physicist, he made important contributions in several areas, including the thermodynamics of information processing, condensed matter physics, and the conductivity of disordered media… most of which were important to the development of computing (of the electronic variety).
He is best known for his discovery and formulation of what’s known as Landauer’s principle: that in any logically irreversible operation that manipulates information, such as erasing a bit of memory, entropy increases and an associated amount of energy is dissipated as heat– a “thermodynamic cost of forgetting,” relevant to chip design (how closely packed elements can be on a chip and still handle the heat), reversible computing, quantum information, and quantum computing… but not an issue for origami.
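The principle puts a concrete floor under forgetting: erasing one bit at absolute temperature T must dissipate at least kT ln 2, where k is Boltzmann’s constant. At room temperature that bound is minuscule, on the order of 3 × 10⁻²¹ joules per bit, but it is not zero. A quick sketch:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact, by the SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum energy in joules dissipated by irreversibly erasing one bit."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At ~300 K, the floor is about 2.9e-21 J per bit erased.
room_temperature_cost = landauer_limit(300)
```

Real chips dissipate many orders of magnitude more than this per logic operation, which is why the principle matters mainly as a long-run limit and as motivation for reversible computing.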
“A prudent question is one-half of wisdom”*…
The death of Queen Elizabeth I created a career opportunity for philosopher and statesman Francis Bacon– one that, as Susan Wise Bauer explains– led him to found empiricism, to pioneer inductive reasoning, and in so doing, to advance the scientific method…
In 1603, Francis Bacon, London born, was forty-three years old: a trained lawyer and amateur philosopher, happily married, politically ambitious, perpetually in debt.
He had served Elizabeth I of England loyally at court, without a great deal of recognition in return. But now Elizabeth was dead at the age of sixty-nine, and her crown would go to her first cousin twice removed: James VI of Scotland, James I of England.
Francis Bacon hoped for better things from the new king, but at the moment he had no particular ‘in’ at the English court. Forced to be patient, he began working on a philosophical project he’d had in mind for some years–a study of human knowledge that he intended to call Of the Proficience and Advancement of Learning, Divine and Human.
Like most of Bacon’s undertakings, the project was ridiculously ambitious. He set out to classify all learning into the proper branches and lay out all of the possible impediments to understanding. Part I condemned what he called the three ‘distempers’ of learning, which included ‘vain imaginations,’ pursuits such as astrology and alchemy that had no basis in actual fact; Part II divided all knowledge into three branches and suggested that natural philosophy should occupy the prime spot. Science, the project of understanding the universe, was the most important pursuit man could undertake. The study of history (‘everything that has happened’) and poesy (imaginative writings) took definite second and third places.
For a time, Bacon didn’t expand on these ideas. The Advancement of Learning opened with a fulsome dedication to James I (‘I have been touched–yea, and possessed–with an extreme wonder at those your virtues and faculties . . . the largeness of your capacity, the faithfulness of your memory, the swiftness of your apprehension, the penetration of your judgment, and the facility and order of your elocution …. There hath not been since Christ’s time any king or temporal monarch which hath been so learned in all literature and erudition, divine and human’), and this groveling soon yielded fruit. In 1607 Bacon was appointed as solicitor general, a position he had coveted for years, and over the next decade or so he poured his energies into his government responsibilities.
He did not return to natural philosophy until after his appointment to the even higher post of chancellor in 1618. Now that he had battled his way to the top of the political dirt pile, he announced his intentions to write a work with even greater scope–a new, complete system of philosophy that would shape the minds of men and guide them into new truths. He called this masterwork the Great Instauration: the Great Establishment, a whole new way of thinking, laid out in six parts.
Part I, a survey of the existing ‘ancient arts’ of the mind, repeated the arguments of the Advancement of Learning. But Part II, published in 1620 as a stand-alone work, was something entirely different. It was a wholesale challenge to Aristotelian methods, a brand-new ‘doctrine of a more perfect use of reason.’
Aristotelian thinking relies heavily on deductive reasoning– for ancient logicians and philosophers, the highest and best road to the truth. Deductive reasoning moves from general statements (premises) to specific conclusions.
MAJOR PREMISE: All heavy matter falls toward the center of the universe.
MINOR PREMISE: The earth is made of heavy matter.
MINOR PREMISE: The earth is not falling.
CONCLUSION: The earth must already be at the center of the universe.
But Bacon had come to believe that deductive reasoning was a dead end that distorted evidence: ‘Having first determined the question according to his will,’ he objected, ‘man then resorts to experience, and bending her to conformity with his placets [expressions of assent], leads her about like a captive in a procession.’ Instead, he argued, the careful thinker must reason the other way around: starting from specifics and building toward general conclusions, beginning with particular pieces of evidence and working, inductively, toward broader assertions.
This new way of thinking–inductive reasoning–had three steps to it. The ‘true method,’ Bacon explained,
‘first lights the candle, and then by means of the candle shows the way; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms, and from established axioms again new experiments.’
In other words, the natural philosopher must first come up with an idea about how the world works: ‘lighting the candle.’ Second, he must test the idea against physical reality, against ‘experience duly ordered’–both observations of the world around him and carefully designed experiments. Only then, as a last step, should he ‘deduce axioms,’ coming up with a theory that could be claimed to carry truth.
Hypothesis, experiment, conclusion: Bacon had just traced the outlines of the scientific method…
Francis Bacon and the Scientific Method
An excerpt from The Story of Western Science by @SusanWiseBauer, via the invaluable @delanceyplace.
* Francis Bacon
###
As we embrace empiricism, we might send carefully-transmitted birthday greetings to Augusto Righi; he was born on this date in 1850. A physicist and a pioneer in the study of electromagnetism, he showed that radio waves displayed characteristics of light wave behavior (reflection, refraction, polarization, and interference), with which they shared the electromagnetic spectrum. In 1894 Righi was the first person to generate microwaves.
Righi influenced the young Guglielmo Marconi, the inventor of radio, who visited him at his lab. Indeed, Marconi invented the first practical wireless telegraphy transmitters and receivers in 1894, using Righi’s four-ball spark oscillator (from Righi’s microwave work) in his transmitters.
“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…
A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…
In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.
This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium, an alpha particle, and release a whole lot of energy:
N14 + N14 ⇒ Mg24 + α + 17.7 MeV
The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?
Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.
Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.
But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.
At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.
… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.
[Ceglowski unpacks those assumptions…]
If you accept all these premises, what you get is disaster!
Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.
As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.
At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.
From there things get very sci-fi very quickly.
[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]
This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.
…
People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Is the idea of “superintelligence” just a memetic hazard?
When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.
The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.
Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.
But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.
The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.
[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]
The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.
In his book, Bostrom lists six things an AI would have to master to take over the world:
- Intelligence Amplification
- Strategizing
- Social manipulation
- Hacking
- Technology research
- Economic productivity
If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.
Sam Altman, the man who runs YCombinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.
Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.
I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.
So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.
So what’s the answer? What’s the fix?
We need better scifi! And like so many things, we already have the technology…
[Ceglowski explains– and demonstrates– what he means…]
In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.
It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.
The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.
And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.
So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.
What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…
In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…
Eminently worth reading in full: “Superintelligence: The Idea That Eats Smart People,” from @baconmeteor.
* John Rich
###
As we find balance, we might recall that it was on this date in 1936 that Alan Turing’s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…