(Roughly) Daily

Posts Tagged ‘Galileo’

“Alchemy. The link between the immemorial magic arts and modern science. Humankind’s first systematic effort to unlock the secrets of matter by reproducible experiment.”*…

Science has entered a new era of alchemy, suggests Robbert Dijkgraaf, Director of the Institute for Advanced Study in Princeton– and, he argues, that’s a good thing…

Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives — from internet searches to social media feeds — the modern equivalent of turning lead into gold? Moreover: Would that be such a bad thing?

According to the prominent AI researcher Ali Rahimi and others, today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis. Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.

It’s true that we have little fundamental understanding of the inner workings of self-learning algorithms, or of the limits of their applications. These new forms of AI are very different from traditional computer codes that can be understood line by line. Instead, they operate within a black box, seemingly unknowable to humans and even to the machines themselves.

This discussion within the AI community has consequences for all the sciences. With deep learning impacting so many branches of current research — from drug discovery to the design of smart materials to the analysis of particle collisions — science itself may be at risk of being swallowed by a conceptual black box. It would be hard to have a computer program teach chemistry or physics classes. By deferring so much to machines, are we discarding the scientific method that has proved so successful, and reverting to the dark practices of alchemy?

Not so fast, says Yann LeCun, co-recipient of the 2018 Turing Award for his pioneering work on neural networks. He argues that the current state of AI research is nothing new in the history of science. It is just a necessary adolescent phase that many fields have experienced, characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach. It’s simply that we’re more familiar with its opposite.

After all, it’s easy to imagine knowledge flowing downstream, from the source of an abstract idea, through the twists and turns of experimentation, to a broad delta of practical applications. This is the famous “usefulness of useless knowledge,” advanced by Abraham Flexner in his seminal 1939 essay (itself a play on the very American concept of “useful knowledge” that emerged during the Enlightenment).

A canonical illustration of this flow is Albert Einstein’s general theory of relativity. It all began with the fundamental idea that the laws of physics should hold for all observers, independent of their movements. He then translated this concept into the mathematical language of curved space-time and applied it to the force of gravity and the evolution of the cosmos. Without Einstein’s theory, the GPS in our smartphones would drift off course by about 7 miles a day.
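
That last figure is easy to sanity-check. The sketch below uses standard textbook values for the GPS constellation (our numbers, not Dijkgraaf’s): special relativity makes the satellites’ moving clocks run slow, general relativity makes their high-altitude clocks run fast, and the net drift times the speed of light gives the daily ranging error:

```python
# Back-of-the-envelope check of the "~7 miles a day" claim, using
# standard published GPS constants (these values are ours, not from
# Dijkgraaf's essay).
MU  = 3.986004418e14      # Earth's gravitational parameter GM, m^3/s^2
C   = 2.99792458e8        # speed of light, m/s
R_E = 6.371e6             # mean Earth radius, m
R_S = 2.6560e7            # GPS orbital radius (~20,200 km altitude), m
DAY = 86400.0             # seconds per day

v2 = MU / R_S                         # orbital speed squared (circular orbit)

sr = -v2 / (2 * C**2)                 # special relativity: moving clocks run slow
gr = (MU / C**2) * (1/R_E - 1/R_S)    # general relativity: higher clocks run fast

drift = (sr + gr) * DAY               # net fractional shift accumulated per day
print(f"net clock drift: {drift * 1e6:.1f} microseconds/day")   # ~ +38.5
print(f"ranging error:   {drift * C / 1609.34:.1f} miles/day")  # ~ 7.2
```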

But maybe this paradigm of the usefulness of useless knowledge is what the Danish physicist Niels Bohr liked to call a “great truth” — a truth whose opposite is also a great truth. Maybe, as AI is demonstrating, knowledge can also flow uphill.

In the broad history of science, as LeCun suggested, we can spot many examples of this effect, which can perhaps be dubbed “the uselessness of useful knowledge.” An overarching and fundamentally important idea can emerge from a long series of step-by-step improvements and playful experimentation — say, from Fröbel to Nobel.

Perhaps the best illustration is the discovery of the laws of thermodynamics, a cornerstone of all branches of science. These elegant equations, describing the conservation of energy and increase of entropy, are laws of nature, obeyed by all physical phenomena. But these universal concepts only became apparent after a long, confusing period of experimentation, starting with the construction of the first steam engines in the 18th century and the gradual improvement of their design. Out of the thick mist of practical considerations, mathematical laws slowly emerged…

One could even argue that science itself has followed this uphill path. Until the birth of the methods and practices of modern research in the 17th century, scientific research consisted mostly of nonsystematic experimentation and theorizing. Long considered academic dead ends, these ancient practices have been reappraised in recent years: Alchemy is now considered to have been a useful and perhaps even necessary precursor to modern chemistry — more proto-science than hocus-pocus.

The appreciation of tinkering as a fruitful path toward grand theories and insights is particularly relevant for current research that combines advanced engineering and basic science in novel ways. Driven by breakthrough technologies, nanophysicists are tinkering away, building the modern equivalents of steam engines on the molecular level, manipulating individual atoms, electrons and photons. Genetic editing tools such as CRISPR allow us to cut and paste the code of life itself. With structures of unimaginable complexity, we are pushing nature into new corners of reality. With so many opportunities to explore new configurations of matter and information, we could enter a golden age of modern-day alchemy, in the best sense of the word.

However, we should never forget the hard-won cautionary lessons of history. Alchemy was not only a proto-science, but also a “hyper-science” that overpromised and underdelivered. Astrological predictions were taken so seriously that life had to adapt to theory, instead of the other way around. Unfortunately, modern society is not free from such magical thinking, putting too much confidence in omnipotent algorithms, without critically questioning their logical or ethical basis.

Science has always followed a natural rhythm of alternating phases of expansion and concentration. Times of unstructured exploration were followed by periods of consolidation, grounding new knowledge in fundamental concepts. We can only hope that the current period of creative tinkering in artificial intelligence, quantum devices and genetic editing, with its cornucopia of useful applications, will eventually lead to a deeper understanding of the world…

Today’s powerful but little-understood artificial intelligence breakthroughs echo past examples of unexpected scientific progress: “The Uselessness of Useful Knowledge,” from @RHDijkgraaf at @the_IAS.

Pair with: “Neuroscience’s Existential Crisis– we’re mapping the brain in amazing detail—but our brain can’t understand the picture” for a less optimistic view.

* John Ciardi

###

As we experiment, we might recall that it was on this date in 1992 that the Roman Catholic Church admitted that it had erred in condemning Galileo.  For 359 years, the Church had excoriated Galileo’s contentions (e.g., that the Earth revolves around the Sun) as anti-scriptural heresy.  In 1633, at age 69, Galileo had been forced by the Roman Inquisition to repent, and spent the last eight years of his life under house arrest.  After 13 years of inquiry, Pope John Paul II’s commission of historical, scientific, and theological scholars brought the pontiff a “not guilty” finding for Galileo; the Pope himself met with the Pontifical Academy of Sciences to help correct the record.

Galileo (standing; white collar, dark smock) showing the Doge of Venice (seated) how to use the telescope. From a fresco by Giuseppe Bertini

source

“Happy accidents are real gifts”*…

Fresco by Bertini, “Galileo and the Doge of Venice”

On the morning of July 25, 1610, Galileo pointed his telescope at Saturn and was surprised to find that it appeared to be flanked by two round blobs or bumps, one on either side. Unfortunately, Galileo’s telescope wasn’t quite advanced enough to pick out precisely what he had seen (his observations are now credited with being the earliest description of Saturn’s rings in astronomical history), but he nevertheless presumed that whatever he had seen was something special. And he wanted people to know about it.

Keen to announce his news and thereby secure credit for whatever it was he had discovered, Galileo sent letters to his friends and fellow astronomers. This being Galileo, the announcement was far from straightforward:

SMAISMRMILMEPOETALEUMIBUNENUGTTAUIRAS

Each message that Galileo sent out contained little more than that jumbled string of letters, which when rearranged correctly spelled out the Latin sentence, “altissimum planetam tergeminum observavi”—or “I have observed that the highest planet is threefold.”

As the outermost planet known to science at the time, Saturn was the “highest planet” in question. And unaware that he had discovered its rings, Galileo was merely suggesting to his contemporaries that he had found that the planet was somehow divided into three parts. Announcing the discovery in the form of an anagram might have bought Galileo time to continue his observations, but there was a problem: anagrams can easily be misinterpreted.

One of those to whom Galileo sent a letter was the German scientist Johannes Kepler. A keen astronomer himself, Kepler had followed and supported Galileo’s work for several years, so when the coded letter arrived at his home in Prague he quickly set to work solving it. Unfortunately for him, he got it completely wrong.

Kepler rearranged Galileo’s word jumble as “salve, umbistineum geminatum Martia proles,” which he interpreted as “be greeted, double-knob, children of Mars.” His solution was far from perfect (umbistineum isn’t really a grammatical Latin word, for one thing), but Kepler was nevertheless convinced that, not only had he correctly solved the riddle, but Galileo’s apparent discovery proved a theory he had been contemplating for several months.
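
Both readings, it turns out, use exactly the letters of Galileo’s jumble. A quick way to check — folding V into U, since Latin spelling treats them as one letter:

```python
# Verify both solutions against Galileo's jumble. Latin treats U and V
# as the same letter, so V is folded into U before comparing counts.
from collections import Counter

def letters(text: str) -> Counter:
    kept = "".join(ch for ch in text.upper() if ch.isalpha())
    return Counter(kept.replace("V", "U"))

puzzle  = "SMAISMRMILMEPOETALEUMIBUNENUGTTAUIRAS"
galileo = "altissimum planetam tergeminum observavi"
kepler  = "salve, umbistineum geminatum Martia proles"

print(letters(galileo) == letters(puzzle))  # True
print(letters(kepler)  == letters(puzzle))  # True: letter-perfect, even
                                            # if the Latin grammar is shaky
```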

Earlier in 1610, Galileo had discovered the four so-called “Galilean moons” of Jupiter: Io, Europa, Ganymede and Callisto. Although we now know that Jupiter has several dozen moons of varying shapes, sizes, and orbits, at the time the announcement of just four natural satellites had led Kepler to presume that there must be a natural progression in the heavens: the Earth has one moon; Jupiter, two places further out from the Earth, has four; and sat between the two is Mars, which Kepler theorized must surely have two moons, to maintain the balanced celestial sequence 1, 2, 4 and so on (his only question was whether Saturn had six or eight).

Kepler got the anagram wrong, and his presumption that Jupiter had only four moons was wrong too. Yet as misguided as both of these were, the conclusion Kepler drew from them—namely, that Mars had two moons—was entirely correct. Unfortunately for Kepler, his theory would not be proved until long after his death: the two Martian moons, Phobos and Deimos (named after Ares’s sons in Greek mythology), were not discovered until 1877, by the American astronomer Asaph Hall.

Nevertheless, a misinterpretation of the anagram had accidentally predicted a major astronomical discovery of the 19th century, nearly 300 years before it occurred…

Serendipity in science: “How A Misinterpreted Anagram Predicted The Moons of Mars.”

(For an account of Isaac Newton’s use of anagrams in his scientific communications, see here.)

* David Lynch

###

As we code and decode, we might recall that it was on this date in 1781 that German-born English astronomer William Herschel detected every schoolboy’s favorite planet, Uranus, in the night sky (though he initially thought it was a comet); it was the first planet to be discovered with the aid of a telescope.  In fact, Uranus had been detected much earlier– but mistaken for a star: the earliest likely observation was by Hipparchos, who (in 128 BC) seems to have recorded the planet as a star for his star catalogue, later incorporated into Ptolemy’s Almagest.  The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as the star 34 Tauri.

Herschel named the planet in honor of his King: Georgium Sidus (George’s Star), an unpopular choice, especially outside England; argument over alternatives ensued.  Berlin astronomer Johann Elert Bode came up with the moniker “Uranus,” which was adopted throughout the world’s astronomical community by 1850.

 Uranus, photographed by Voyager 2 in 1986.

 source

Written by (Roughly) Daily

March 13, 2021 at 1:01 am

“Facts alone, no matter how numerous or verifiable, do not automatically arrange themselves into an intelligible, or truthful, picture of the world. It is the task of the human mind to invent a theoretical framework to account for them.”*…

PPPL physicist Hong Qin in front of images of planetary orbits and computer code

… or maybe not. A couple of decades ago, your correspondent came across a short book that aimed to explain how we think we know what we think we know: Truth: A History and a Guide for the Perplexed, by Felipe Fernández-Armesto (then a professor of history at Oxford; now at Notre Dame)…

According to Fernández-Armesto, people throughout history have sought to get at the truth in one or more of four basic ways. The first is through feeling: truth apprehended directly, as a tangible entity. The third-century B.C. Chinese sage Chuang Tzu stated, “The universe is one.” Others described the universe as a unity of opposites. To the fifth-century B.C. Greek philosopher Heraclitus, the cosmos is a tension like that of the bow or the lyre. The notion of chaos comes along only later, together with uncomfortable concepts like infinity.

Then there is authoritarianism, ”the truth you are told.” Divinities can tell us what is wanted, if only we can discover how to hear them. The ancient Greeks believed that Apollo would speak through the mouth of an old peasant woman in a room filled with the smoke of bay leaves; traditionalist Azande in the Nilotic Sudan depend on the response of poisoned chickens. People consult sacred books, or watch for apparitions. Others look inside themselves, for truths that were imprinted in their minds before they were born or buried in their subconscious minds.

Reasoning is the third way Fernández-Armesto cites. Since knowledge attained by divination or introspection is subject to misinterpretation, eventually people return to the use of reason, which helped thinkers like Chuang Tzu and Heraclitus describe the universe. Logical analysis was practiced in China and Egypt long before it was formalized in Greece and in India. If the Greeks are mistakenly credited with the invention of rational thinking, it is because of the effective ways they wrote about it. Plato illustrated his dialogues with memorable myths and brilliant metaphors. Truth, as he saw it, could be discovered only by abstract reasoning, without reliance on sense perception or observation of outside phenomena. Rather, he sought to excavate it from the recesses of the mind. The word for truth in Greek, aletheia, means “what is not forgotten.”

Plato’s pupil Aristotle developed the techniques of logical analysis that still enable us to get at the knowledge hidden within us. He examined propositions by stating possible contradictions and developed the syllogism, a method of proof based on stated premises. His methods of reasoning have influenced independent thinkers ever since. Logicians developed a system of notation, free from the associations of language, that comes close to being a kind of mathematics. The uses of pure reason have had a particular appeal to lovers of force, and have flourished in times of absolutism like the 17th and 18th centuries.

Finally, there is sense perception. Unlike his teacher, Plato, and many of Plato’s followers, Aristotle realized that pure logic had its limits. He began with study of the natural world and used evidence gained from experience or experimentation to support his arguments. Ever since, as Fernández-Armesto puts it, science and sense have kept time together, like voices in a duet that sing different tunes. The combination of theoretical and practical gave Western thinkers an edge over purer reasoning schemes in India and China.

The scientific revolution began when European thinkers broke free from religious authoritarianism and stopped regarding this earth as the center of the universe. They used mathematics along with experimentation and reasoning and developed mechanical tools like the telescope. Fernández-Armesto’s favorite example of their empirical spirit is the grueling Arctic expedition in 1736 in which the French scientist Pierre Moreau de Maupertuis determined (rightly) that the earth was not round like a ball but rather an oblate spheroid…

source

One of Fernández-Armesto’s most basic points is that our capacity to apprehend “the truth”– to “know”– has developed throughout history. And history’s not over. So, your correspondent wondered, mightn’t there emerge a fifth source of truth, one rooted in the assessment of vast, ever-more-complete data maps of reality– a fifth way of knowing?

Well, those days may be upon us…

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a ‘serving algorithm,’ then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
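
Qin’s actual technique is more sophisticated than anything sketched here, but the flavor of “data to data, with no law in the middle” is easy to demonstrate. In the toy sketch below (our construction, not the PPPL code), we simulate a few orbits, fit an off-the-shelf neural network to the map from each state to the next, and then predict an unseen orbit by iterating that learned map:

```python
# Toy illustration of "data to data, no law in the middle" (not Qin's
# actual algorithm): learn the one-step map of simulated orbits, then
# predict a new orbit by iterating the learned map -- no Newton inside.
import numpy as np
from sklearn.neural_network import MLPRegressor

def orbit(x, y, vx, vy, steps, dt=0.02, mu=1.0):
    """Leapfrog integration of a body around a unit central mass."""
    out = np.empty((steps + 1, 4))
    out[0] = x, y, vx, vy
    for i in range(steps):
        x, y, vx, vy = out[i]
        r3 = (x*x + y*y) ** 1.5
        vx, vy = vx - mu*x/r3 * dt/2, vy - mu*y/r3 * dt/2   # half kick
        x, y = x + vx*dt, y + vy*dt                         # drift
        r3 = (x*x + y*y) ** 1.5
        vx, vy = vx - mu*x/r3 * dt/2, vy - mu*y/r3 * dt/2   # half kick
        out[i + 1] = x, y, vx, vy
    return out

# "Past observations": several orbits with different launch speeds.
trajs = [orbit(1.0, 0.0, 0.0, v0, steps=4000) for v0 in (0.9, 1.0, 1.1, 1.2)]
X = np.vstack([t[:-1] for t in trajs])   # state at time t
Y = np.vstack([t[1:]  for t in trajs])   # state at time t + dt

# The black box: a small network fitted to the one-step map.
box = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(X, Y)

# Predict an orbit the model never saw, purely by iterating the map.
state = np.array([1.0, 0.0, 0.0, 1.05])
truth = orbit(*state, steps=300)
path = [state]
for _ in range(300):
    state = box.predict(state[None, :])[0]
    path.append(state)
err = np.linalg.norm(np.array(path)[:, :2] - truth[:, :2], axis=1).mean()
print(f"mean position error over 300 steps: {err:.3f}")
```

Nothing resembling an inverse-square law appears in the fitted model; how faithfully such a black box extrapolates beyond its training data is precisely the question the debate above turns on.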

The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless ‘translate’ a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

“I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”

Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” [Qin’s collaborator Eric] Palmerduca said…

But then, as Edwin Hubble observed, “observations always involve theory,” theory that’s implicit in the particulars and the structure of the data being collected and fed to the AI. So, perhaps this is less a new way of knowing, than a new way of enhancing Fernández-Armesto’s third way– reason– as it became the scientific method…

The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”

In either case: “New machine learning theory raises questions about nature of science.”

* Francis Bello

###

As we experiment with epistemology, we might send carefully-observed and calculated birthday greetings to Georg Joachim de Porris, better known by his professional name, Rheticus; he was born on this date in 1514. A mathematician, astronomer, cartographer, navigational-instrument maker, medical practitioner, and teacher, he was well-known in his day for his stature in all of those fields. But he is surely best-remembered as the sole pupil of Copernicus, whose work he championed– most impactfully, facilitating the publication of his master’s De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres)… and informing the most famous work by yesterday’s birthday boy, Galileo.

source

“Immigrants, we get the job done”*…

When the Piccirilli Brothers arrived in New York from Italy in 1888, they brought with them a skill, artistry, and passion for stone-carving unrivaled in the United States. At their studio at 467 East 142nd Street, in the Mott Haven section of the Bronx, the brothers turned monumental slabs of marble into some of the nation’s most recognizable icons, including the Senate pediment of the US Capitol Building and the statue of Abraham Lincoln that sits resolutely in the Lincoln Memorial on the National Mall.

The Piccirillis not only helped set our national narrative in stone but also left an indelible mark on New York City. They carved hundreds of commissions around the five boroughs, including the 11 figures in the pediment of the New York Stock Exchange, the “Four Continents” adorning the Customs House at Bowling Green, the two stately lions that guard the New York Public Library, both statues of George Washington for the Arch at Washington Square, and upwards of 500 individual carvings at Riverside Church…

The remarkable story of a remarkable family: “How six Italian immigrants from the South Bronx carved some of the nation’s most iconic sculptures.” 

* Lin-Manuel Miranda (as Hamilton, to Lafayette in Hamilton)

###

As we celebrate sculpture, we might wish a grateful Happy Birthday to another son of Italy, Galileo Galilei, the physicist, mathematician, astronomer, and philosopher who, with Francis Bacon, pioneered the Scientific Method; he was born on this date in 1564.  It was Galileo’s observations that gave conclusive support to Copernicus’ heliocentric theory of the solar system.

Tintoretto’s portrait of Galileo

  source

“We forced our opponents to change their minds”*…

 


 

There are those who say this pandemic shouldn’t be politicised. That doing so is tantamount to basking in self-righteousness. Like the religious hardliner shouting it’s the wrath of God, or the populist scaremongering about the “Chinese virus”, or the trend-watcher predicting we’re finally entering a new era of love, mindfulness, and free money for all.

There are also those who say now is precisely the time to speak out. That the decisions being made at this moment will have ramifications far into the future. Or, as Obama’s chief of staff put it after Lehman Brothers fell in 2008: “You never want a serious crisis to go to waste.”

In the first few weeks, I tended to side with the naysayers. I’ve written before about the opportunities crises present, but now it seemed tactless, even offensive. Then more days passed. Little by little, it started to dawn that this crisis might last months, a year, even longer. And that anti-crisis measures imposed temporarily one day could well become permanent the next.

No one knows what awaits us this time. But it’s precisely because we don’t know, because the future is so uncertain, that we need to talk about it…

In a crisis, what was once unthinkable can suddenly become inevitable. We’re in the middle of the biggest societal shakeup since the second world war…

In a fundamentally optimistic essay, historian Rutger Bregman peers through the Overton Window to explain the seemingly sudden ripening of ideas that appeared impossible just months ago: “The neoliberal era is ending. What comes next?”

See also: “Bruno Latour: ‘This is a global catastrophe that has come from within’.”

And for some (more) historical context, in the form of a scientist’s computer model that tracks “cycles” he has detected in the U.S. since 1780– culminating (so far) in his prediction in Nature in 2010 that 2020 would see huge unrest– see “This Researcher Predicted 2020 Would Be Mayhem. Here’s What He Says May Come Next.”

* Margaret Thatcher in 2002, alluding to Tony Blair and New Labour when asked what she saw as her great achievement.  (N.B., as the piece excerpted above explains, in 2020, Bernie Sanders’s “moderate” rival Joe Biden is proposing tax increases.)

###

As we buckle up, we might recall that it was on this date in 1633 that Galileo delivered his Fourth (and final) Deposition to the court of the Inquisition, which had raised theological objections to his heliocentric view of the solar system. (It was his second such trial: he had been tried in 1616 for the same offense, and both censured and censored– his books were banned.)  This second trial, occasioned by his publication of Dialogue Concerning the Two Chief World Systems, which resurfaced his heliocentric view, ended the following day, when the Inquisitor issued these rulings:

 

  • Galileo was found “vehemently suspect of heresy”, namely of having held the opinions that the Sun lies motionless at the center of the universe, that the Earth is not at its center and moves, and that one may hold and defend an opinion as probable after it has been declared contrary to Holy Scripture.  He was required to “abjure, curse, and detest” those opinions.
  • He was sentenced to formal imprisonment at the pleasure of the Inquisition.  (On the following day this was commuted to house arrest, under which he remained for the rest of his life.)
  • His offending Dialogue was banned; and in an action not announced at the trial, publication of any of his works was forbidden, including any he might write in the future.


Galileo before the Holy Office, a 19th-century painting by Joseph-Nicolas Robert-Fleury

source

 

Written by (Roughly) Daily

June 18, 2020 at 1:01 am
