(Roughly) Daily

Posts Tagged ‘Roman Inquisition’

“Alchemy. The link between the immemorial magic arts and modern science. Humankind’s first systematic effort to unlock the secrets of matter by reproducible experiment.”*…

Science has entered a new era of alchemy, suggests Robbert Dijkgraaf, Director of the Institute for Advanced Study at Princeton– and, he argues, that’s a good thing…

Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives — from internet searches to social media feeds — the modern equivalent of turning lead into gold? Moreover: Would that be such a bad thing?

According to the prominent AI researcher Ali Rahimi and others, today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis. Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.

It’s true that we have little fundamental understanding of the inner workings of self-learning algorithms, or of the limits of their applications. These new forms of AI are very different from traditional computer codes that can be understood line by line. Instead, they operate within a black box, seemingly unknowable to humans and even to the machines themselves.

This discussion within the AI community has consequences for all the sciences. With deep learning impacting so many branches of current research — from drug discovery to the design of smart materials to the analysis of particle collisions — science itself may be at risk of being swallowed by a conceptual black box. It would be hard to have a computer program teach chemistry or physics classes. By deferring so much to machines, are we discarding the scientific method that has proved so successful, and reverting to the dark practices of alchemy?

Not so fast, says Yann LeCun, co-recipient of the 2018 Turing Award for his pioneering work on neural networks. He argues that the current state of AI research is nothing new in the history of science. It is just a necessary adolescent phase that many fields have experienced, characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach. It’s simply that we’re more familiar with its opposite.

After all, it’s easy to imagine knowledge flowing downstream, from the source of an abstract idea, through the twists and turns of experimentation, to a broad delta of practical applications. This is the famous “usefulness of useless knowledge,” advanced by Abraham Flexner in his seminal 1939 essay (itself a play on the very American concept of “useful knowledge” that emerged during the Enlightenment).

A canonical illustration of this flow is Albert Einstein’s general theory of relativity. It all began with the fundamental idea that the laws of physics should hold for all observers, independent of their movements. He then translated this concept into the mathematical language of curved space-time and applied it to the force of gravity and the evolution of the cosmos. Without Einstein’s theory, the GPS in our smartphones would drift off course by about 7 miles a day.
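The scale of that drift can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming the commonly cited figure (not stated in the essay itself) that relativistic effects, left uncorrected, would make GPS satellite clocks run fast by roughly 38 microseconds per day:

```python
# Rough check of the "7 miles a day" GPS drift claim.
# Assumption: combined special- and general-relativistic effects amount
# to ~38 microseconds of clock error per day (a widely quoted estimate).
drift_per_day_s = 38e-6        # clock error accumulated per day, seconds
c = 299_792_458                # speed of light, m/s

# Ranging error = clock error * speed of light
position_error_m = drift_per_day_s * c
position_error_miles = position_error_m / 1609.344

print(f"{position_error_m / 1000:.1f} km ≈ {position_error_miles:.1f} miles per day")
```

The result comes out to roughly 11 km, or about 7 miles per day, consistent with the figure above.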

But maybe this paradigm of the usefulness of useless knowledge is what the Danish physicist Niels Bohr liked to call a “great truth” — a truth whose opposite is also a great truth. Maybe, as AI is demonstrating, knowledge can also flow uphill.

In the broad history of science, as LeCun suggested, we can spot many examples of this effect, which can perhaps be dubbed “the uselessness of useful knowledge.” An overarching and fundamentally important idea can emerge from a long series of step-by-step improvements and playful experimentation — say, from Fröbel to Nobel.

Perhaps the best illustration is the discovery of the laws of thermodynamics, a cornerstone of all branches of science. These elegant equations, describing the conservation of energy and increase of entropy, are laws of nature, obeyed by all physical phenomena. But these universal concepts only became apparent after a long, confusing period of experimentation, starting with the construction of the first steam engines in the 18th century and the gradual improvement of their design. Out of the thick mist of practical considerations, mathematical laws slowly emerged…

One could even argue that science itself has followed this uphill path. Until the birth of the methods and practices of modern research in the 17th century, scientific research consisted mostly of nonsystematic experimentation and theorizing. Long considered academic dead ends, these ancient practices have been reappraised in recent years: Alchemy is now considered to have been a useful and perhaps even necessary precursor to modern chemistry — more proto-science than hocus-pocus.

The appreciation of tinkering as a fruitful path toward grand theories and insights is particularly relevant for current research that combines advanced engineering and basic science in novel ways. Driven by breakthrough technologies, nanophysicists are tinkering away, building the modern equivalents of steam engines on the molecular level, manipulating individual atoms, electrons and photons. Genetic editing tools such as CRISPR allow us to cut and paste the code of life itself. With structures of unimaginable complexity, we are pushing nature into new corners of reality. With so many opportunities to explore new configurations of matter and information, we could enter a golden age of modern-day alchemy, in the best sense of the word.

However, we should never forget the hard-won cautionary lessons of history. Alchemy was not only a proto-science, but also a “hyper-science” that overpromised and underdelivered. Astrological predictions were taken so seriously that life had to adapt to theory, instead of the other way around. Unfortunately, modern society is not free from such magical thinking, putting too much confidence in omnipotent algorithms, without critically questioning their logical or ethical basis.

Science has always followed a natural rhythm of alternating phases of expansion and concentration. Times of unstructured exploration were followed by periods of consolidation, grounding new knowledge in fundamental concepts. We can only hope that the current period of creative tinkering in artificial intelligence, quantum devices and genetic editing, with its cornucopia of useful applications, will eventually lead to a deeper understanding of the world…

Today’s powerful but little-understood artificial intelligence breakthroughs echo past examples of unexpected scientific progress: “The Uselessness of Useful Knowledge,” from @RHDijkgraaf at @the_IAS.

Pair with: “Neuroscience’s Existential Crisis: we’re mapping the brain in amazing detail—but our brain can’t understand the picture” for a less optimistic view.

* John Ciardi


As we experiment, we might recall that it was on this date in 1993 that the Roman Catholic Church admitted that it had erred in condemning Galileo.  For 359 years, the Church had excoriated Galileo’s contentions (e.g., that the Earth revolves around the Sun) as anti-scriptural heresy.  In 1633, at age 69, Galileo had been forced by the Roman Inquisition to repent, and spent the last eight years of his life under house arrest.  After 13 years of inquiry, Pope John Paul II’s commission of historical, scientific, and theological scholars brought the pontiff a “not guilty” finding for Galileo; the Pope himself met with the Pontifical Academy of Sciences to help correct the record.

Galileo (standing; white collar, dark smock) showing the Doge of Venice (seated) how to use the telescope. From a fresco by Giuseppe Bertini


“Life swarms with innocent monsters”*…


MS H.8, fol. 191 verso: St. Martha taming the tarasque; St. Martha preaching (margin); and initial O. From the “Hours of Henry VIII,” book of hours, France, Tours, ca. 1500


From dragons and unicorns to mandrakes and griffins, monsters and medieval times are inseparable in the popular imagination. But medieval depictions of monsters—the subject of a fascinating new exhibition at the Morgan Library & Museum in Manhattan [which includes the image above]—weren’t designed simply to scare their viewers: They had many purposes, and provoked many reactions. They terrified, but they also taught. They enforced prejudices and social hierarchies, but they also inspired unlikely moments of empathy. They were medieval European propaganda, science, art, theology, and ethics all at once…

Finding the meaning in monsters: “The Symbols of Prejudice Hidden in Medieval Art.”

* Charles Baudelaire


As we decode dragons, we might recall that it was on this date in 1542, with Pope Paul III’s papal bull Licet ab initio, that the Roman Inquisition formally began.  In the tradition of the medieval inquisitions, and “inspired” by the Spanish Inquisition, the Roman Inquisition gave six cardinals the power to arrest and imprison anyone suspected of heresy, to confiscate their property, and to put them to death.

While not so much in the prudish spirit of Savonarola’s “Bonfire of the Vanities,” the Roman Inquisition– which lasted into the 18th century– was ruthless in rooting out what it considered dangerous deviations from orthodoxy.  Copernicus, Galileo, Giordano Bruno, and Cesare Cremonini were all persecuted.  While only Bruno was executed, the others were effectively (or actually) banished, and in the cases of Copernicus and Galileo, their works were placed on the Index Librorum Prohibitorum (the Catholic Church’s Index of Forbidden Books).



Written by (Roughly) Daily

July 21, 2018 at 1:01 am

Atavistic Tendencies: what’s old is new…


A team of astrobiologists, working with a group of oncologists, has suggested that cancer resembles ancient forms of life that flourished between 600 million and 1 billion years ago.  The genes that controlled the behavior of these early multicellular organisms still reside in our own cells, managed by more recent genes that keep them in check.  It’s when these newer “control genes” fail that the older mechanisms take over, the cell reverts to its earlier behaviors– and cancer does its growing-out-of-control damage.

Reporting in the journal Physical Biology, Paul Davies and Charles Lineweaver explain:

“Advanced” metazoan life of the form we now know, i.e. organisms with cell specialization and organ differentiation, was preceded by colonies of eukaryotic cells in which cellular cooperation was fairly rudimentary, consisting of networks of adhering cells exchanging information chemically, and forming self-organized assemblages with only a moderate division of labor…

So, they suggest, cancer isn’t an attack of “rogue cells,” evolving quickly to overpower normal biological-metabolic routines; it’s a kind of atavism, a throwback…  In conversation with Life Scientist, Lineweaver elaborates:

Unlike bacteria and viruses, cancer has not developed the capacity to evolve into new forms. In fact, cancer is better understood as the reversion of cells to the way they behaved a little over one billion years ago, when [life was] nothing more than loose-knit colonies of only partially differentiated cells.

We think that the tumors that develop in cancer patients today take the same form as these simple cellular structures did more than a billion years ago…

The explanation makes a powerful kind of sense, at least at a systemic level: cancers occur in virtually all metazoans (with the exception of the altogether weird naked mole rat).  As Davies and Lineweaver note, “This quasi-ubiquity suggests that the mechanisms of cancer are deep-rooted in evolutionary history, a conjecture that receives support from both paleontology and genetics.”

The good news, Life Scientist observes, is that this means combating cancer is not necessarily as complex as if the cancers were rogue cells evolving new and novel defence mechanisms within the body.

Instead, because cancers fall back on the same evolved mechanisms that were used by early life, we can expect them to remain predictable; thus, if they’re susceptible to treatment, it’s unlikely they’ll evolve new ways to get around it.

“Given cancer’s formidable complexity and diversity, how might one make progress toward controlling it? If the atavism hypothesis is correct, there are new reasons for optimism,” [Davies and Lineweaver] write.

[TotH to slashdot]


As we resist the impulse, remembering that there are other good reasons not to smoke, we might spare a thought for Giordano Bruno, the Dominican friar, philosopher, mathematician, and astronomer whose concept of the infinite universe expanded on Copernicus’s model; he was the first European to understand the universe as a continuum where the stars we see at night are identical in nature to the Sun.  Bruno’s views were considered dangerously heretical by the (Roman) Inquisition, which imprisoned him in 1592; after eight years of refusals to recant, on this date in 1600, he was burned at the stake.

Giordano Bruno

Written by (Roughly) Daily

February 17, 2011 at 1:01 am
