(Roughly) Daily

Posts Tagged ‘Martin Gardner’

“Machines take me by surprise with great frequency”*…

In search of universals in the 17th century, Gottfried Leibniz imagined the calculus ratiocinator, a theoretical framework for universal logical calculation, a vision that led Norbert Wiener to suggest that Leibniz should be considered the patron saint of cybernetics. In the 19th century, Charles Babbage and Ada Lovelace took a pair of whacks at making it real.

Ironically, it was confronting the impossibility of a universal calculator that led to modern computing. In 1936 (the same year that Charlie Chaplin released Modern Times), Alan Turing (following Gödel’s demonstration that mathematics is incomplete and addressing Hilbert’s “decision problem” about the limits of computation) published the (notional) design of a “machine” that elegantly demonstrated those limits– and, as Sheon Han explains, birthed computing as we know it…

… [Hilbert’s] question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.

The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation, but also proved a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It’s abstract because it doesn’t (and can’t) physically exist as a tangible device. Instead, it’s a conceptual model of computation: If the machine can calculate a function, then the function is computable.

With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process — a set of instructions, which today we’d call an algorithm — that can always determine whether a given statement is true?…

… in 1936, Church and Turing — using different methods — independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway’s Game of Life, are undecidable: No algorithm can determine whether a certain pattern will appear from an initial pattern.

Beyond answering these fundamental questions, Turing’s machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of other Turing machines (their rules and input tapes) and simulate their behaviors on its own input tape, producing the same output that the simulated machine would produce, just as today’s computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture — called the von Neumann architecture — that made the universal Turing machine concept possible in a real-life machine…
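
The universal machine is, at bottom, an interpreter: a transition table is just data, so one program can read any table and run it on a tape. Here is a minimal Python sketch of that idea (the simulator and the binary-increment machine below are invented for illustration; they come from neither Turing’s paper nor the Quanta article):

```python
# Illustrative sketch: a Turing machine reduced to a transition table plus a tape.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    table maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right); the machine stops in state "halt".
    """
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine (invented for illustration): add 1 to a binary number by
# scanning to the right end of the input, then carrying back toward the left.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "_"): ("1", -1, "done"),
    ("done",  "0"): ("0", -1, "done"),
    ("done",  "1"): ("1", -1, "done"),
    ("done",  "_"): ("_", +1, "halt"),
}

print(run_turing_machine(increment, "1011"))  # prints "1100"
```

Feeding the simulator a different table, rather than rewriting the simulator, is the essential move of the universal Turing machine, and of the stored-program computers that followed it.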

As Turing said, “if a machine is expected to be infallible, it cannot also be intelligent.” On the importance of thought experiments: “The Most Important Machine That Was Never Built,” from @sheonhan in @QuantaMagazine.

* Alan Turing

###

As we sum it up, we might spare a thought for Martin Gardner; he died on this date in 2010.  Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf., especially, his work on Lewis Carroll– including the delightful Annotated Alice– and on G.K. Chesterton). And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a regular column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.

 source

Written by (Roughly) Daily

May 22, 2023 at 1:00 am

“Intelligence is really a kind of taste: taste in ideas.”*…

On the attribution of intelligence…

A new study published in the Christmas issue of the British Medical Journal — the annual fun one — sought to settle once and for all which phrase to describe a simple task is more deserved, “It’s not brain surgery” or “It’s not rocket science.” They did this by administering an intelligence test to 329 aerospace engineers and 72 neurosurgeons. Turns out it’s conditional: the rocket scientists and neurosurgeons are pretty much evenly matched, though the aerospace engineers were better at mental manipulations while the brain surgeons were better at semantic problem solving. That said, no significant difference was found between the aerospace engineers and the control population, while the same held among the neurosurgeons, although they did have a speedier problem solving time that was statistically significant. That said, the paper’s authors contend maybe pedestaling this kind of niche intellect is overall discouraging to people given the results, so I think the obvious compromise is that we all agree to just change to, “Well, it’s not exactly blogging about MoviePass,” to honor the real titans of our day…

“Not Exactly Brain Surgery,” from Walt Hickey (@WaltHickey) in his essential Numlock News (@NumlockAM).

Read the underlying research here.

* Susan Sontag

###

As we get smart, we might send polymathic birthday greetings to Piet Hein; he was born on this date in 1905. A mathematician, inventor, designer, author, and poet, Hein wrote short poems, known as gruks (or grooks), that first started to appear in the daily newspaper Politiken shortly after the German occupation of Denmark in April 1940, under the pseudonym “Kumbel [‘tombstone’] Kumbell.” He invented the Soma cube and the board game Hex, and designed the famous “superellipse” traffic circle in Stockholm.

Before the war Hein had, in his own words, “played mental ping-pong” with Niels Bohr. After the war Hein was a close associate of Martin Gardner, and his work was frequently featured in Gardner’s Mathematical Games column in Scientific American. At the age of 95, Gardner wrote his own autobiography and titled it Undiluted Hocus-Pocus. Both the title and the dedication of the book come from one of Hein’s grooks.

Piet Hein, standing in front of the Hans Christian Andersen statue in Copenhagen

source

“Nature is pleased with simplicity”*…

As Clare Boothe Luce once said, sometimes “simplicity is the ultimate sophistication”…

… The uniformity of the cosmic microwave background (CMB) tells us that, at its birth, ‘the Universe has turned out to be stunningly simple,’ as Neil Turok, director emeritus of the Perimeter Institute for Theoretical Physics in Ontario, Canada, put it at a public lecture in 2015. ‘[W]e don’t understand how nature got away with it,’ he added. A few decades after Penzias and Wilson’s discovery, NASA’s Cosmic Background Explorer satellite measured faint ripples in the CMB, with variations in radiation intensity of less than one part in 100,000. That’s a lot less than the variation in whiteness you’d see in the cleanest, whitest sheet of paper you’ve ever seen.

Wind forward 13.8 billion years, and, with its trillions of galaxies and zillions of stars and planets, the Universe is far from simple. On at least one planet, it has even managed to generate a multitude of life forms capable of comprehending both the complexity of our Universe and the puzzle of its simple origins. Yet, despite being so rich in complexity, some of these life forms, particularly those we now call scientists, retain a fondness for that defining characteristic of our primitive Universe: simplicity.

The Franciscan friar William of Occam (1285-1347) wasn’t the first to express a preference for simplicity, though he’s most associated with its implications for reason. The principle known as Occam’s Razor insists that, given several accounts of a problem, we should choose the simplest. The razor ‘shaves off’ unnecessary explanations, and is often expressed in the form ‘entities should not be multiplied beyond necessity’. So, if you pass a house and hear barking and purring, then you should think a dog and a cat are the family pets, rather than a dog, a cat and a rabbit. Of course, a bunny might also be enjoying the family’s hospitality, but the existing data provides no support for the more complex model. Occam’s Razor says that we should keep models, theories or explanations simple until proven otherwise – in this case, perhaps until sighting a fluffy tail through the window.

Seven hundred years ago, William of Occam used his razor to dismantle medieval science or metaphysics. In subsequent centuries, the great scientists of the early modern era used it to forge modern science. The mathematician Claudius Ptolemy’s (c100-170 CE) system for calculating the motions of the planets, based on the idea that the Earth was at the centre, was a theory of byzantine complexity. So, when Copernicus (1473-1543) was confronted by it, he searched for a solution that ‘could be solved with fewer and much simpler constructions’. The solution he discovered – or rediscovered, as it had been proposed in ancient Greece by Aristarchus of Samos, but then dismissed by Aristotle – was of course the solar system, in which the planets orbit around the Sun. Yet, in Copernicus’s hands, it was no more accurate than Ptolemy’s geocentric system. Copernicus’s only argument in favour of heliocentricity was that it was simpler.

Nearly all the great scientists who followed Copernicus retained Occam’s preference for simple solutions. In the 1500s, Leonardo da Vinci insisted that human ingenuity ‘will never devise any [solutions] more beautiful, nor more simple, nor more to the purpose than Nature does’. A century or so later, his countryman Galileo claimed that ‘facts which at first seem improbable will, even on scant explanation, drop the cloak which has hidden them and stand forth in naked and simple beauty.’ Isaac Newton noted in his Principia (1687) that ‘we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances’; while in the 20th century Einstein is said to have advised that ‘Everything should be made as simple as possible, but not simpler.’ In a Universe seemingly so saturated with complexity, what work does simplicity do for us?

Part of the answer is that simplicity is the defining feature of science. Alchemists were great experimenters, astrologers can do maths, and philosophers are great at logic. But only science insists on simplicity…

Just why do simpler laws work so well? The statistical approach known as Bayesian inference, after the English statistician Thomas Bayes (1702-61), can help explain simplicity’s power. Bayesian inference allows us to update our degree of belief in an explanation, theory or model based on its ability to predict data. To grasp this, imagine you have a friend who has two dice. The first is a simple six-sided cube, and the second is more complex, with 60 sides that can throw 60 different numbers. Suppose your friend throws one of the dice in secret and calls out a number, say 5. She asks you to guess which dice was thrown. Like astronomical data that either the geocentric or heliocentric system could account for, the number 5 could have been thrown by either dice. Are they equally likely? Bayesian inference says no, because it weights alternative models – the six- vs the 60-sided dice – according to the likelihood that they would have generated the data. There is a one-in-six chance of a six-sided dice throwing a 5, whereas only a one-in-60 chance of the 60-sided dice throwing a 5. Comparing likelihoods, then, the six-sided dice is 10 times more likely to be the source of the data than the 60-sided dice.

Simple scientific laws are preferred, then, because, if they fit or fully explain the data, they’re more likely to be the source of it.
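
McFadden’s dice comparison is easy to check with a few lines of arithmetic. A minimal sketch, assuming (as the excerpt implies) equal prior belief in the two dice before the throw:

```python
# Bayes' rule applied to the excerpt's example: a 6-sided vs a 60-sided die,
# and a reported throw of 5. Equal priors (1/2 each) are an assumption here.

def posterior(priors, likelihoods):
    """Return posterior probabilities, proportional to prior * likelihood."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

priors = {"6-sided": 0.5, "60-sided": 0.5}
likelihoods = {"6-sided": 1 / 6, "60-sided": 1 / 60}   # P(throwing a 5 | die)

for name, p in zip(priors, posterior(list(priors.values()),
                                     list(likelihoods.values()))):
    print(f"{name}: {p:.3f}")

# Prints 6-sided: 0.909 and 60-sided: 0.091, a 10-to-1 ratio matching the
# "10 times more likely" comparison of likelihoods in the excerpt.
```

The simpler model wins not because simplicity is assumed up front, but because a model that can produce fewer outcomes stakes more of its probability on the outcomes it does predict.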

In my latest book, I propose a radical, if speculative, solution for why the Universe might in fact be as simple as it’s possible to be. Its starting point is the remarkable theory of cosmological natural selection (CNS) proposed by the physicist Lee Smolin. CNS proposes that, just like living creatures, universes have evolved through a cosmological process, analogous to natural selection.

Smolin came up with CNS as a potential solution to what’s called the fine-tuning problem: how the fundamental constants and parameters, such as the masses of the fundamental particles or the charge of an electron, got to be the precise values needed for the creation of matter, stars, planets and life. CNS first notes the apparent symmetry between the Big Bang, in which stars and particles were spewed out of a dimensionless point at the birth of our Universe, and the Big Crunch, the scenario for the end of our Universe when a supermassive black hole swallows up stars and particles before vanishing back into a dimensionless point. This symmetry has led many cosmologists to propose that black holes in our Universe might be the ‘other side’ of Big Bangs of other universes, expanding elsewhere. In this scenario, time did not begin at the Big Bang, but continues backwards through to the death of its parent universe in a Big Crunch, through to its birth from a black hole, and so on, stretching backward in time, potentially into infinity. Not only that but, since our region of the Universe is filled with an estimated 100 billion supermassive black holes, Smolin proposes that each is the progenitor of one of 100 billion universes that have descended from our own.

The model Smolin proposed includes a kind of universal self-replication process, with black holes acting as reproductive cells. The next ingredient is heredity. Smolin proposes that each offspring universe inherits almost the same fundamental constants of its parent. The ‘almost’ is there because Smolin suggests that, in a process analogous to mutation, their values are tweaked as they pass through a black hole, so baby universes become slightly different from their parent. Lastly, he imagines a kind of cosmological ecosystem in which universes compete for matter and energy. Gradually, over a great many cosmological generations, the multiverse of universes would become dominated by the fittest and most fecund universes, through their possession of those rare values of the fundamental constants that maximise black holes, and thereby generate the maximum number of descendant universes.

Smolin’s CNS theory explains why our Universe is finely tuned to make many black holes, but it does not account for why it is simple. I have my own explanation of this, though Smolin himself is not convinced. First, I point out that natural selection carries its own Occam’s Razor that removes redundant biological features through the inevitability of mutations. While most mutations are harmless, those that impair vital functions are normally removed from the gene pool because the individuals carrying them leave fewer descendants. This process of ‘purifying selection’, as it’s known, maintains our genes, and the functions they encode, in good shape.

However, if an essential function becomes redundant, perhaps by a change of environment, then purifying selection no longer works. For example, by standing upright, our ancestors lifted their noses off the ground, so their sense of smell became less important. This means that mutations could afford to accumulate in the newly dispensable genes, until the functions they encoded were lost. For us, hundreds of smell genes accumulated mutations, so that we lost the ability to detect hundreds of odours that we no longer need to smell. This inevitable process of mutational pruning of inessential functions provides a kind of evolutionary Occam’s Razor that removes superfluous biological complexity.

Perhaps a similar process of purifying selection operates in cosmological natural selection to keep things simple…

It’s unclear whether the kind of multiverse envisaged by Smolin’s theory is finite or infinite. If infinite, then the simplest universe capable of forming black holes will be infinitely more abundant than the next simplest universe. If instead the supply of universes is finite, then we have a similar situation to biological evolution on Earth. Universes will compete for available resources – matter and energy – and the simplest that convert more of their mass into black holes will leave the most descendants. For both scenarios, if we ask which universe we are most likely to inhabit, it will be the simplest, as they are the most abundant. When inhabitants of these universes peer into the heavens to discover their cosmic microwave background and perceive its incredible smoothness, they, like Turok, will remain baffled at how their universe has managed to do so much from such a ‘stunningly simple’ beginning.

The cosmological razor idea has one further startling implication. It suggests that the fundamental law of the Universe is not quantum mechanics, or general relativity or even the laws of mathematics. It is the law of natural selection discovered by Darwin and Wallace. As the philosopher Daniel Dennett insisted, it is ‘The single best idea anyone has ever had.’ It might also be the simplest idea that any universe has ever had.

Does the existence of a multiverse hold the key to why nature’s laws seem so simple? “Why simplicity works,” from JohnJoe McFadden (@johnjoemcfadden).

* “Nature does nothing in vain when less will serve; for Nature is pleased with simplicity and affects not the pomp of superfluous causes.” – Isaac Newton, The Mathematical Principles of Natural Philosophy

###

As we emphasize the essential, we might spare a thought for Martin Gardner; he died on this date in 2010. Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf., especially, his work on Lewis Carroll– including the delightful Annotated Alice– and on G.K. Chesterton). And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a regular column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.

 source

“Mathematics, rightly viewed, possesses not only truth, but supreme beauty”*…

Readers will know of your correspondent’s deep affection and respect for Martin Gardner (cf., e.g., here), so will understand his inability to pass up this appreciation:

You may think that the most interesting man in the world has a scraggly gray beard, drinks Mexican beer, and hangs out with women half his age. But you’re dead wrong. I discovered the real deal, the authentic most interesting man in the world, on the shelves of my local public library when I was a freshman in high school. His name was Martin Gardner.

I first stumbled upon Gardner’s work while rummaging around a bottom shelf in the rear of the library, right below my favorite book in the building, Jean Hugard’s The Royal Road to Card Magic. The Scientific American Book of Mathematical Puzzles and Diversions, published by Gardner in 1959, represented a big leap from Hugard, yet I devoured as much of it as my 14-year-old mind could comprehend. Much of the math was too advanced for me, but the parts I understood charmed and delighted me. I came back the next week to check out The Second Scientific American Book of Mathematical Puzzles and Diversions. I followed up with Gardner’s The Numerology of Dr. Matrix and Unexpected Hangings, also on the shelves of the library, and soon purchased a copy of his Fads and Fallacies in the Name of Science at a used bookstore. Around this same time, I bought, at great expense, a brand new hardbound copy of 536 Curious Problems and Puzzles by Henry Ernest Dudeney, and learned that this treasure trove of strange and peculiar diversions had been edited by (yes, you guessed it) Martin Gardner. I felt like shouting out: “Mama, there’s that man again!”

Later I learned that Gardner’s expertise extended far beyond math and science. I can’t even begin to explain my delight when I discovered that Gardner fraternized with magicians. During my teen years, I spent countless hours practicing card tricks and sleights-of-hand — I never realized my ambition of performing as a card magician, but the finger dexterity later helped when I switched my focus to playing jazz piano — and I was thrilled to learn that Gardner knew Dai Vernon, Frank Garcia, Paul Curry, Ed Marlo, and other masters of playing card prestidigitation. They were not household names. In my mind, someone like Dai Vernon was way too cool to be known by the uninitiated. But these were precisely the kind of mysterious masterminds of obscure arts that Martin Gardner would include among his buddies.

And finally as a humanities student at Stanford I learned about Martin Gardner’s contributions as literary critic and scholar. His annotated guide to Lewis Carroll is a classic work of textual deconstruction (although Gardner would never have used that term), and my boyhood hero also applied his sharp analytical mind to deciphering the works of Samuel Taylor Coleridge, G.K. Chesterton, and L. Frank Baum. I could continue the list, but you get the idea. Whatever your interests — whether the theory of relativity or “Jabberwocky,” the prisoner’s dilemma or a mean bottom deal from a clean deck, Martin Gardner was your man. He ranks among the greatest autodidacts and polymaths of the 20th century. Or, as I prefer to say, he was the most interesting man in the world, the fellow I would invite to that mythical dinner party where all parties, living or dead, are compelled to accept your invitation…

Read on for Ted Gioia’s (@tedgioia) appreciation of Gardner’s autobiographical works: “Martin Gardner: The Most Interesting Man in the World.”

* Bertrand Russell

###

As we add it up, we might send carefully-calculated birthday greetings to J. H. C. (Henry) Whitehead; he was born on this date in 1904. A mathematician (and nephew of Alfred North Whitehead), he was a topologist, one of the founders of homotopy theory, the study of continuous deformations of mappings between topological spaces.

Born in Chennai and educated at Oxford and Princeton, he joined the codebreakers at Bletchley Park during World War II and by 1945 was one of some fifteen mathematicians working in the “Newmanry,” a section headed by Max Newman that was responsible for breaking a German teleprinter cipher using machine methods– which included the use of Colossus machines, early digital electronic computers.

He spent the rest of his career at Oxford (where he was Waynflete Professor of Pure Mathematics at Magdalen College). He served as president of the London Mathematical Society, which created two prizes in his memory: the annually-awarded Whitehead Prize and the biennially-awarded Senior Whitehead Prize.

source

“Beauty is the first test: there is no permanent place in the world for ugly mathematics”*…

 

Long-time readers will know of your correspondent’s admiration and affection for Martin Gardner (cf., e.g., here and here). So imagine his delight to learn from @MartyKrasney of this…

Martin wrote about 300 articles for Scientific American between 1952 and 1998, most famously in his legendary “Mathematical Games” column starting in Jan 1957. Many of those articles are now viewed as classics, from his seminal piece on hexaflexagons in Dec 1956—which led to the offer to write a regular column for the magazine—to his breakthrough essays on pentominoes, rep-tiles, the Soma cube, the art of Escher, the fourth dimension, sphere packing, Conway’s game of Life, Newcomb’s paradox, Mandelbrot’s fractals, Penrose tiles, and RSA cryptography, not forgetting the recurring numerological exploits of his alter ego Dr. Matrix, and the tongue-in-cheek April Fool column from 1975.

Many of those gems just listed were associated with beautiful graphics and artwork, so it’s no surprise that Martin scored some Scientific American covers over the years, though as we’ll see below, there’s surprisingly little overlap between his “greatest hits” and his “cover stories.”

It’s worth noting that, just as the magazine editors selected the titles under which his original articles appeared—he generally ditched those in favor of his own when he republished them in the spin-off books—artwork submitted was often altered by Scientific American staff artists…

The full dozen, replete with the cover art, at “A Gardner’s Dozen—Martin’s Scientific American Cover Stories.”

* G.H. Hardy

###

As we agree with G.K. Chesterton that “the difference between the poet and the mathematician is that the poet tries to get his head into the heavens while the mathematician tries to get the heavens into his head,” we might send carefully calculated birthday greetings to John Charles Fields; he was born on this date in 1863. A mathematician of accomplishment, he is better remembered as a tireless advocate of the field and its importance– and best remembered as the founder of the award posthumously named for him: The Fields Medal, familiarly known as “the Nobel of mathematics.”

 source

 

Written by (Roughly) Daily

May 14, 2017 at 1:01 am
