(Roughly) Daily

Posts Tagged ‘Galileo’

“There is only one good, knowledge, and one evil, ignorance”*…

The School of Athens (1509–1511) by Raphael, depicting famous classical Greek philosophers (source)

If only it were that simple. Trevor Klee unpacks the travails of Galileo to illustrate the way that abstractions become practical “knowledge”…

… We’re all generally looking for the newest study, or the most up-to-date review. At the very least, we certainly aren’t looking through ancient texts for scientific truths.

This might seem obvious to you. Of course you’d never look at an old paper. That old paper was probably done with worse instruments and worse methods. Just because something’s old or was written by someone whose name you recognize doesn’t mean that it’s truthful.

But why is it obvious to you? Because you live in a world that philosophy built. The standards for truth that you imbibed as a child are not natural standards of truth. If you had been an educated person in 1200s Europe, your standard for truth would have been what has stood the test of time. You would have lived among the ruins of Rome and studied the anatomy texts of the Greeks, known that your society could produce neither of those, and concluded that they knew something that your society did not. Your best hope would then have been simply to copy them as closely as possible.

This was less true by the time Galileo was alive, which is why an educated man like Galileo could entertain the idea that he knew better than the ancient Greeks, and why his ideas found some purchase among his fellow academicians (including the then Pope, actually). But still, a prominent train of thought held that a citation from Aristotle was worth more than a direct observation through a telescope.

But you live in a different world now. You live in a world in which the science of tomorrow is better than the science of today, and our societal capabilities advance every year. We can build everything the ancients did, and things they never even imagined possible. So you respect tradition less, and place more trust in what can actually be measured in the physical world.

Today, this battle over truth is so far in the past that we don’t even know it was ever a battle. The closest we come to this line of reasoning is when new age medicine appeals to “ancient wisdom”, but even its practitioners feel compelled to quote studies. Even more recent battles are mostly settled, like the primacy of randomized, double-blind, controlled studies over non-randomized, uncontrolled ones.

The reason we mark these battles is not just fun or historical curiosity. It’s to remind us that what we take for granted was actually fought for by generations before us. And it’s to make sure that we know the importance of teaching these lessons so thoroughly that future generations take them for granted as well. A world in which nobody would dream of letting established theory overrule actual empirical evidence is a better world than the one Galileo lived in…

On the importance of understanding the roots of our understanding: “You live in a world that philosophy built,” from @trevor_klee via @ByrneHobart.

Apposite (in an amusing way): “Going Against The Grain Weevils,” on Aristotle’s Generation of Animals and household pests.

* Socrates, as quoted in Diogenes Laertius, Lives and Opinions of Eminent Philosophers (probably early third century CE)

###

As we examine epistemology, we might send elegantly phrased and eclectic birthday greetings to Persian polymath Omar Khayyam; the philosopher, mathematician, astronomer, epigrammatist, and poet was born on this date in 1048. While he’s probably best known to English-speakers as a poet, via Edward FitzGerald’s famous translation of (what he called) the Rubaiyat of Omar Khayyam, FitzGerald’s attribution of the book’s poetry to Omar (as opposed to the aphorisms and other quotes in the volume) is now questioned by many scholars, who believe those verses to be by several different Persian authors.

In any case, Omar was unquestionably one of the major philosophers, mathematicians and astronomers of the medieval period.  He is the author of one of the most important treatises on algebra written before modern times, the Treatise on Demonstration of Problems of Algebra, which includes a geometric method for solving cubic equations by intersecting a hyperbola with a circle.  His astronomical observations contributed to the reform of the Persian calendar.  And he made important contributions to mechanics, geography, mineralogy, music, climatology and Islamic theology.

source

Written by (Roughly) Daily

May 15, 2023 at 1:00 am

“Look up at the stars and not down at your feet. Try to make sense of what you see, and wonder about what makes the universe exist.”*…

Our environment…

This map shows a slice of our Universe. It was created from astronomical data taken night after night over a period of 15 years using a telescope in New Mexico, USA. We are located at the bottom. At the top is the actual edge of the observable Universe. In between, we see about 200,000 galaxies.

Each tiny dot is a galaxy, shown in its actual position and color. Each galaxy contains billions of stars and planets. Our own galaxy, the Milky Way, is just one dot at the bottom. Looking up, we see that space is filled with galaxies forming a vast filamentary structure. Far away from us (higher up in the map), the filaments become harder to see…

For a (much crisper, more vivid, and interactive) version: “The Map of the Observable Universe.”

* Stephen Hawking

###

As we explore, we might recall that it was on this date in 1616 that the minutes of the Roman Inquisition recorded the conclusion that Galileo’s writings in support of Copernicus’ heliocentric view of the solar system were “foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture.” The next day, Galileo was called before Cardinal Bellarmine, who (on Pope Paul V’s instruction) ordered Galileo to abandon the teaching. Shortly thereafter, Copernicus’s De Revolutionibus and other heliocentric works were banned (entered onto the Index Librorum Prohibitorum) “until correction.”

Sixteen years later, Galileo “published” Dialogue Concerning the Two Chief World Systems (Dialogo sopra i due massimi sistemi del mondo)– that’s to say, he presented the first copy to his patron, Ferdinando II de’ Medici, Grand Duke of Tuscany.  Dialogue, which compared the heliocentric Copernican and the traditional geocentric Ptolemaic systems, was an immediate best-seller.

While there was no copyright available to Galileo, his book was printed and distributed under a license from the Inquisition.  Still, the following year it was deemed heretical– and he joined Copernicus on the Index Librorum Prohibitorum: the publication of anything else Galileo had written or ever might write was also banned… a ban that remained in effect until 1835.

Domenico Tintoretto’s portrait of Galileo (source)

Written by (Roughly) Daily

February 25, 2023 at 1:00 am

“No law of nature, however general, has been established all at once; its recognition has always been preceded by many presentiments.”*…

Laws of nature are impossible to break, and nearly as difficult to define. Just what kind of necessity do they possess?

… The natural laws limit what can happen. They are stronger than the laws of any country because it is impossible to violate them. If it is a law of nature that, for example, no object can be accelerated from rest to beyond the speed of light, then it is not merely that such accelerations never occur. They cannot occur.

There are many things that never actually happen but could have happened in that their occurrence would violate no law of nature. For instance, to borrow an example from the philosopher Hans Reichenbach (1891-1953), perhaps in the entire history of the Universe there never was nor ever will be a gold cube larger than one mile on each side. Such a large gold cube is not impossible. It just turns out never to exist. It’s like a sequence of moves that is permitted by the rules of chess but never takes place in the entire history of chess-playing. By contrast, if it is a law of nature that energy is never created or destroyed, then it is impossible for the total energy in the Universe to change. The laws of nature govern the world like the rules of chess determine what is permitted and what is forbidden during a game of chess, in an analogy drawn by the biologist T H Huxley (1825-95).

Laws of nature differ from one another in many respects. Some laws concern the general structure of spacetime, while others concern some specific inhabitant of spacetime (such as the law that gold doesn’t rust). Some laws relate causes to their effects (as Coulomb’s law relates electric charges to the electric forces they cause). But other laws (such as the law of energy conservation or the spacetime symmetry principles) do not specify the effects of any particular sort of cause. Some laws involve probabilities (such as the law specifying the half-life of some radioactive isotope). And some laws are currently undiscovered – though I can’t give you an example of one of those! (By ‘laws of nature’, I will mean the genuine laws of nature that science aims to discover, not whatever scientists currently believe to be laws of nature.)
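To make the probabilistic case concrete– a standard textbook formulation, not Lange’s own notation– consider an isotope with half-life $t_{1/2}$. The law fixes, for every atom, the probability of surviving to time $t$:

\[ \Pr[\text{survival to time } t] = 2^{-t/t_{1/2}} = e^{-t \ln 2 / t_{1/2}} \]

It says nothing, though, about when any particular atom will decay.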

What all of the various laws have in common, despite their diversity, is that it is necessary that everything obey them. It is impossible for them to be broken. An object must obey the laws of nature…

But although all these truisms about the laws of nature sound plausible and familiar, they are also imprecise and metaphorical. The natural laws obviously do not ‘govern’ the Universe in the way that the rules of chess govern a game of chess. Chess players know the rules and so deliberately conform to them, whereas inanimate objects do not know the laws of nature and have no intentions.

Scientists discover laws of nature by acquiring evidence that some apparent regularity is not only never violated but also could never have been violated. For instance, when every ingenious effort to create a perpetual-motion machine turned out to fail, scientists concluded that such a machine was impossible – that energy conservation is a natural law, a rule of nature’s game rather than an accident. In drawing this conclusion, scientists adopted various counterfactual conditionals, such as that, even if they had tried a different scheme, they would have failed to create a perpetual-motion machine. That it is impossible to create such a machine (because energy conservation is a law of nature) explains why scientists failed every time they tried to create one.

Laws of nature are important scientific discoveries. Their counterfactual resilience enables them to tell us about what would have happened under a wide range of hypothetical circumstances. Their necessity means that they impose limits on what is possible. Laws of nature can explain why something failed to happen by revealing that it cannot happen – that it is impossible.

We began with several vague ideas that seem implicit in scientific reasoning: that the laws of nature are important to discover, that they help us to explain why things happen, and that they are impossible to break. Now we can look back and see that we have made these vague ideas more precise and rigorous. In doing so, we found that these ideas are not only vindicated, but also deeply interconnected. We now understand better what laws of nature are and why they are able to play the roles that science calls upon them to play.

“What is a Law of Nature?,” Marc Lange explains in @aeonmag.

* Dmitri Mendeleev (creator of the Periodic Table)

###

As we study law, we might send inquisitive birthday greetings to Federico Cesi; he was born on this date in 1585. A scientist and naturalist, he is best remembered as the founder of the Accademia dei Lincei (Lincean Academy), often cited as the first modern scientific society. Cesi coined (or at least was first to publish/disseminate) the word “telescope” to denote the instrument used by Galileo– who was the sixth member of the Lincean Academy.

source

“Alchemy. The link between the immemorial magic arts and modern science. Humankind’s first systematic effort to unlock the secrets of matter by reproducible experiment.”*…

Science has entered a new era of alchemy, suggests Robbert Dijkgraaf, Director of the Institute for Advanced Study at Princeton– and, he argues, that’s a good thing…

Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives — from internet searches to social media feeds — the modern equivalent of turning lead into gold? Moreover: Would that be such a bad thing?

According to the prominent AI researcher Ali Rahimi and others, today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis. Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.

It’s true that we have little fundamental understanding of the inner workings of self-learning algorithms, or of the limits of their applications. These new forms of AI are very different from traditional computer codes that can be understood line by line. Instead, they operate within a black box, seemingly unknowable to humans and even to the machines themselves.

This discussion within the AI community has consequences for all the sciences. With deep learning impacting so many branches of current research — from drug discovery to the design of smart materials to the analysis of particle collisions — science itself may be at risk of being swallowed by a conceptual black box. It would be hard to have a computer program teach chemistry or physics classes. By deferring so much to machines, are we discarding the scientific method that has proved so successful, and reverting to the dark practices of alchemy?

Not so fast, says Yann LeCun, co-recipient of the 2018 Turing Award for his pioneering work on neural networks. He argues that the current state of AI research is nothing new in the history of science. It is just a necessary adolescent phase that many fields have experienced, characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach. It’s simply that we’re more familiar with its opposite.

After all, it’s easy to imagine knowledge flowing downstream, from the source of an abstract idea, through the twists and turns of experimentation, to a broad delta of practical applications. This is the famous “usefulness of useless knowledge,” advanced by Abraham Flexner in his seminal 1939 essay (itself a play on the very American concept of “useful knowledge” that emerged during the Enlightenment).

A canonical illustration of this flow is Albert Einstein’s general theory of relativity. It all began with the fundamental idea that the laws of physics should hold for all observers, independent of their movements. He then translated this concept into the mathematical language of curved space-time and applied it to the force of gravity and the evolution of the cosmos. Without Einstein’s theory, the GPS in our smartphones would drift off course by about 7 miles a day.
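As a quick sanity check on that number– a sketch using the standard figure of roughly 38 microseconds of net relativistic clock drift per day, which is our assumption rather than a number from the essay:

# Back-of-envelope check of the "about 7 miles a day" claim.
# Assumed input: GPS satellite clocks gain ~38 microseconds/day relative
# to ground clocks once special- and general-relativistic effects are
# combined (the standard textbook figure).
C = 299_792_458      # speed of light, m/s
drift = 38e-6        # net relativistic clock offset, seconds per day

error_m = C * drift  # ranging error accumulated over one day
print(round(error_m / 1000, 1), "km/day")         # ~11.4 km/day
print(round(error_m / 1609.344, 1), "miles/day")  # ~7.1 miles/day

Uncorrected, that clock error translates into roughly eleven kilometers– about seven miles– of positioning drift per day, matching the essay’s figure.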

But maybe this paradigm of the usefulness of useless knowledge is what the Danish physicist Niels Bohr liked to call a “great truth” — a truth whose opposite is also a great truth. Maybe, as AI is demonstrating, knowledge can also flow uphill.

In the broad history of science, as LeCun suggested, we can spot many examples of this effect, which can perhaps be dubbed “the uselessness of useful knowledge.” An overarching and fundamentally important idea can emerge from a long series of step-by-step improvements and playful experimentation — say, from Fröbel to Nobel.

Perhaps the best illustration is the discovery of the laws of thermodynamics, a cornerstone of all branches of science. These elegant equations, describing the conservation of energy and increase of entropy, are laws of nature, obeyed by all physical phenomena. But these universal concepts only became apparent after a long, confusing period of experimentation, starting with the construction of the first steam engines in the 18th century and the gradual improvement of their design. Out of the thick mist of practical considerations, mathematical laws slowly emerged…
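For reference, the two laws in their compact modern form– standard notation, not Dijkgraaf’s:

\[ \Delta U = Q - W \qquad \Delta S_{\text{universe}} \ge 0 \]

The first (energy is conserved) and the second (total entropy never decreases) distill a century of boiler-room tinkering into two lines of mathematics.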

One could even argue that science itself has followed this uphill path. Until the birth of the methods and practices of modern research in the 17th century, scientific research consisted mostly of nonsystematic experimentation and theorizing. Long considered academic dead ends, these ancient practices have been reappraised in recent years: Alchemy is now considered to have been a useful and perhaps even necessary precursor to modern chemistry — more proto-science than hocus-pocus.

The appreciation of tinkering as a fruitful path toward grand theories and insights is particularly relevant for current research that combines advanced engineering and basic science in novel ways. Driven by breakthrough technologies, nanophysicists are tinkering away, building the modern equivalents of steam engines on the molecular level, manipulating individual atoms, electrons and photons. Genetic editing tools such as CRISPR allow us to cut and paste the code of life itself. With structures of unimaginable complexity, we are pushing nature into new corners of reality. With so many opportunities to explore new configurations of matter and information, we could enter a golden age of modern-day alchemy, in the best sense of the word.

However, we should never forget the hard-won cautionary lessons of history. Alchemy was not only a proto-science, but also a “hyper-science” that overpromised and underdelivered. Astrological predictions were taken so seriously that life had to adapt to theory, instead of the other way around. Unfortunately, modern society is not free from such magical thinking, putting too much confidence in omnipotent algorithms, without critically questioning their logical or ethical basis.

Science has always followed a natural rhythm of alternating phases of expansion and concentration. Times of unstructured exploration were followed by periods of consolidation, grounding new knowledge in fundamental concepts. We can only hope that the current period of creative tinkering in artificial intelligence, quantum devices and genetic editing, with its cornucopia of useful applications, will eventually lead to a deeper understanding of the world…

Today’s powerful but little-understood artificial intelligence breakthroughs echo past examples of unexpected scientific progress: “The Uselessness of Useful Knowledge,” from @RHDijkgraaf at @the_IAS.

Pair with: “Neuroscience’s Existential Crisis: we’re mapping the brain in amazing detail, but our brain can’t understand the picture” for a less optimistic view.

*  John Ciardi

###

As we experiment, we might recall that it was on this date in 1992 that the Roman Catholic Church admitted that it had erred in condemning Galileo.  For 359 years, the Church had excoriated Galileo’s contentions (e.g., that the Earth revolves around the Sun) as anti-scriptural heresy.  In 1633, at age 69, Galileo had been forced by the Roman Inquisition to repent, and he spent the last eight years of his life under house arrest.  After 13 years of inquiry, Pope John Paul II’s commission of historical, scientific, and theological scholars brought the pontiff a “not guilty” finding for Galileo; the Pope himself met with the Pontifical Academy of Sciences to help correct the record.

Galileo (standing; white collar, dark smock) showing the Doge of Venice (seated) how to use the telescope. From a fresco by Giuseppe Bertini

source

“Happy accidents are real gifts”*…

Fresco by Bertini, “Galileo and the Doge of Venice”

On the morning of July 25, 1610, Galileo pointed his telescope at Saturn and was surprised to find that it appeared to be flanked by two round blobs or bumps, one on either side. Unfortunately, Galileo’s telescope wasn’t quite advanced enough to pick out precisely what he had seen (his observations are now credited with being the earliest description of Saturn’s rings in astronomical history), but he nevertheless presumed that whatever he had seen was something special. And he wanted people to know about it.

Keen to announce his news and thereby secure credit for whatever it was he had discovered, Galileo sent letters to his friends and fellow astronomers. This being Galileo, the announcement was far from straightforward:

SMAISMRMILMEPOETALEUMIBUNENUGTTAUIRAS

Each message that Galileo sent out contained little more than that jumbled string of letters, which when rearranged correctly spelled out the Latin sentence, “altissimum planetam tergeminum observavi”—or “I have observed that the highest planet is threefold.”

As the outermost planet known to science at the time, Saturn was the “highest planet” in question. And unaware that he had discovered its rings, Galileo was merely telling his contemporaries that he had found the planet to be somehow divided into three parts. Announcing the discovery as an anagram bought Galileo time to continue his observations while still staking his claim to priority– but there was a problem: anagrams can easily be misinterpreted.

One of those to whom Galileo sent a letter was the German scientist Johannes Kepler. A keen astronomer himself, Kepler had followed and supported Galileo’s work for several years, so when the coded letter arrived at his home in Prague he quickly set to work solving it. Unfortunately for him, he got it completely wrong.

Kepler rearranged Galileo’s word jumble as “salve, umbistineum geminatum Martia proles,” which he interpreted as “be greeted, double-knob, children of Mars.” His solution was far from perfect (umbistineum isn’t really a grammatical Latin word, for one thing), but Kepler was nevertheless convinced not only that he had correctly solved the riddle, but that Galileo’s apparent discovery proved a theory he had been contemplating for several months.
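Remarkably, both readings really do use exactly the letters Galileo published– something we can verify with a few lines of Python (a minimal sketch; the one assumption is that U and V count as the same letter, as they did in the Latin spelling of the period):

# Verify that Galileo's intended solution and Kepler's misreading are
# both true rearrangements of the published letter-jumble.
# Assumption: U and V are interchangeable, per period Latin orthography.
from collections import Counter

def letters(s):
    # Count letters only: case-folded, spaces/punctuation ignored, V folded into U.
    return Counter(c.replace("v", "u") for c in s.lower() if c.isalpha())

puzzle  = "SMAISMRMILMEPOETALEUMIBUNENUGTTAUIRAS"
galileo = "altissimum planetam tergeminum observavi"
kepler  = "salve umbistineum geminatum Martia proles"

print(letters(galileo) == letters(puzzle))  # True
print(letters(kepler)  == letters(puzzle))  # True

By the spelling conventions of the day, then, Kepler’s rearrangement was letter-perfect; it was his Latin, not his letter-counting, that was shaky.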

Earlier in 1610, Galileo had discovered the four so-called “Galilean moons” of Jupiter: Io, Europa, Ganymede and Callisto. Although we now know that Jupiter has dozens of moons of varying shapes, sizes, and orbits, at the time the announcement of just four natural satellites led Kepler to presume that there must be a natural progression in the heavens: the Earth has one moon; Jupiter, two places further out from the Earth, has four; and sitting between the two is Mars, which Kepler theorized must surely have two moons, to maintain the balanced celestial sequence 1, 2, 4, and so on (his only question was whether Saturn had six or eight).

Kepler had gotten the anagram wrong, and the presumption that Jupiter had only four moons was wrong too. Yet as misguided as both premises were, the conclusion Kepler drew from them– that Mars had two moons– was entirely correct. Unfortunately for Kepler, his theory would not be proved until long after his death: the two Martian moons, Phobos and Deimos (named after Ares’s sons in Greek mythology), were not discovered until 1877, by the American astronomer Asaph Hall.

Nevertheless, a misinterpretation of the anagram had accidentally predicted a major astronomical discovery of the 19th century, more than two and a half centuries before it occurred…

Serendipity in science: “How A Misinterpreted Anagram Predicted The Moons of Mars.”

(For an account of Isaac Newton’s use of anagrams in his scientific communications, see here.)

* David Lynch

###

As we code and decode, we might recall that it was on this date in 1781 that English astronomer William Herschel detected every schoolboy’s favorite planet, Uranus, in the night sky (though he initially thought it was a comet); it was the first planet to be discovered with the aid of a telescope.  In fact, Uranus had been detected much earlier– but mistaken for a star: the earliest likely observation was by Hipparchos, who (in 128 BC) seems to have recorded the planet as a star for his star catalogue, later incorporated into Ptolemy’s Almagest.  The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as the star 34 Tauri.

Herschel named the planet in honor of his King: Georgium Sidus (George’s Star), an unpopular choice, especially outside England; argument over alternatives ensued.  Berlin astronomer Johann Elert Bode came up with the moniker “Uranus,” which was adopted throughout the world’s astronomical community by 1850.

Uranus, photographed by Voyager 2 in 1986.

source

Written by (Roughly) Daily

March 13, 2021 at 1:01 am
