Posts Tagged ‘computation’
“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.”*…
… Indeed, the same might be said of life itself. David Krakauer and Chris Kempes of the Santa Fe Institute suggest that life is starting to look a lot less like an outcome of chemistry and physics, and more like a computational process…
… Today, doubts about conventional explanations of life are growing and a wave of new general theories has emerged to better define our origins. These suggest that life doesn’t only depend on amino acids, DNA, proteins and other forms of matter. Today, it can be digitally simulated, biologically synthesised or made from entirely different materials to those that allowed our evolutionary ancestors to flourish. These and other possibilities are inviting researchers to ask more fundamental questions: if the materials for life can radically change – like the materials for computation – what stays the same? Are there deeper laws or principles that make life possible?
Our planet appears to be exceptionally rare. Of the thousands that have been identified by astronomers, only one has shown any evidence of life. Earth is, in the words of Carl Sagan, a ‘lonely speck in the great enveloping cosmic dark.’ This apparent loneliness is an ongoing puzzle faced by scientists studying the origin and evolution of life: how is it possible that only one planet has shown incontrovertible evidence of life, even though the laws of physics are shared by all known planets, and the elements in the periodic table can be found across the Universe?
The answer, for many, is to accept that Earth really is as unique as it appears: the absence of life elsewhere in the Universe can be explained by accepting that our planet is physically and chemically unlike the many other planets we have formally identified. Only Earth, so the argument goes, produced the special material conditions conducive to our rare chemistry, and it did so around 4 billion years ago, when life first emerged.
In 1952, Stanley Miller and his supervisor Harold Urey provided the first experimental evidence for this idea through a series of experiments at the University of Chicago. The Miller-Urey experiment, as it became known, sought to recreate the atmospheric conditions of early Earth through laboratory equipment, and to test whether organic compounds (amino acids) could be created in a reconstructed inorganic environment. When their experiment succeeded, the emergence of life became bound to the specific material conditions and chemistry on our planet, billions of years ago.
However, more recent research suggests there are likely countless other possibilities for how life might emerge through potential chemical combinations. As the British chemist Lee Cronin, the American theoretical physicist Sara Walker and others have recently argued, seeking near-miraculous coincidences of chemistry can narrow our ability to find other processes meaningful to life. In fact, most chemical reactions, whether they take place on Earth or elsewhere in the Universe, are not connected to life. Chemistry alone is not enough to identify whether something is alive, which is why researchers seeking the origin of life must use other methods to make accurate judgments.
Today, ‘adaptive function’ is the primary criterion for identifying the right kinds of biotic chemistry that give rise to life, as the theoretical biologist Michael Lachmann (our colleague at the Santa Fe Institute) likes to point out. In the sciences, adaptive function refers to an organism’s capacity to biologically change, evolve or, put another way, solve problems. ‘Problem-solving’ may seem more closely related to the domains of society, culture and technology than to the domain of biology. We might think of the problem of migrating to new islands, which was solved when humans learned to navigate ocean currents, or the problem of plotting trajectories, which our species solved by learning to calculate angles, or even the problem of shelter, which we solved by building homes. But genetic evolution also involves problem-solving. Insect wings solve the ‘problem’ of flight. Optical lenses that focus light solve the ‘problem’ of vision. And the kidneys solve the ‘problem’ of filtering blood. This kind of biological problem-solving – an outcome of natural selection and genetic drift – is conventionally called ‘adaptation’. Though it is crucial to the evolution of life, new research suggests it may also be crucial to the origins of life.
This problem-solving perspective is radically altering our knowledge of the Universe…
The idea of life as a kind of computational process has roots that go back to the 4th century BCE, when Aristotle introduced his philosophy of hylomorphism, in which functions take precedence over forms. For Aristotle, abilities such as vision were less about the biological shape and matter of eyes and more about the function of sight. It took around 2,000 years for his idea of hylomorphic functions to evolve into the idea of adaptive traits through the work of Charles Darwin and others. In the 19th century, these naturalists stopped defining organisms by their material components and chemistry, and instead began defining traits by focusing on how organisms adapted and evolved – in other words, how they processed and solved problems. It would then take a further century for the idea of hylomorphic functions to shift into the abstract concept of computation through the work of Alan Turing and the earlier ideas of Charles Babbage.
In the 1930s, Turing became the first to connect the classical Greek idea of function to the modern idea of computation, but his ideas were impossible without the work of Babbage, a century before. Important for Turing was the way Babbage had marked the difference between calculating devices that follow fixed laws of operation, which Babbage called ‘Difference Engines’, and computing devices that follow programmable laws of operation, which he called ‘Analytical Engines.’
Using Babbage’s distinction, Turing developed the most general model of computation: the universal Turing Machine…
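For readers who like to see the abstraction run, here is a minimal sketch of a Turing machine in Python– our own illustration, not anything from Turing's paper; the bit-flipping "program" is a toy example. The essential idea is just a table of rules applied, one symbol at a time, to an unbounded tape:

```python
# A minimal Turing machine simulator (our own sketch; the bit-flipping
# "program" below is a toy example, not anything Turing specified).

def run_turing_machine(program, tape, state="start", blank="_"):
    """Run `program`, a dict mapping (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy program: invert every bit, halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "1011"))   # -> 0100_
```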
Turing did not describe any of the materials out of which such a machine would be built. He had little interest in chemistry beyond the physical requirement that a computer store, read and write bits reliably. That is why, amazingly, this simple programmable machine (albeit one with an unbounded tape) is an abstract model of how our powerful modern computers work. But the theory of computation Turing developed can also be understood as a theory of life. Both computation and life involve a minimal set of algorithms that support adaptive function. These ‘algorithms’ help materials process information, from the rare chemicals that build cells to the silicon semiconductors of modern computers. And so, as some research suggests, a search for life and a search for computation may not be so different. In both cases, we can be side-tracked if we focus on materials: on chemistry, physical environments and conditions.
In response to these concerns, a set of diverse ideas has emerged to explain life anew, through principles and processes shared with computation, rather than the rare chemistry and early-Earth environments simulated in the Miller-Urey experiment. What drives these ideas, developed over the past 60 years by researchers working in disparate disciplines – including physics, computer science, astrobiology, synthetic biology, evolutionary science, neuroscience and philosophy – is a search for the fundamental principles that drive problem-solving matter. Though these researchers have been working in disconnected fields, we believe there are broad patterns to their research on the origins of life. It can be difficult for outsiders, however, to see how their seemingly incommensurable ideas connect to one another, or why they are significant. This is why we have set out to review and organise these new ways of thinking.
Their proposals can be grouped into three distinct categories, three hypotheses, which we have named Tron, Golem and Maupertuis…
[The authors unpack all three proposals…]
… Is life problem-solving matter? When thinking about our biotic origins, it is important to remember that most chemical reactions are not connected to life, whether they take place here or elsewhere in the Universe. Chemistry alone is not enough to identify life. Instead, researchers use adaptive function – a capacity for solving problems – as the primary evidence and filter for identifying the right kinds of biotic chemistry. If life is problem-solving matter, our origins were not a miraculous or rare event governed by chemical constraints but, instead, the outcome of far more universal principles of information and computation. And if life is understood through these principles, then perhaps it has come into existence more often than we previously thought, driven by problems as big as the bang that started our abiotic universe moving 13.8 billion years ago.
The physical account of the origin and evolution of the Universe is a purely mechanical affair, explained through events such as the Big Bang, the formation of light elements, the condensation of stars and galaxies, and the formation of heavy elements. This account doesn’t involve objectives, purposes, or problems. But the physics and chemistry that gave rise to life appear to have been doing more than simply obeying the fundamental laws. At some point in the Universe’s history, matter became purposeful. It became organised in a way that allowed it to adapt to its immediate environment. It evolved from a Babbage-like Difference Engine into a Turing-like Analytical Engine. This is the threshold for the origin of life.
In the abiotic universe, physical laws, such as the law of gravitation, are like ‘calculations’ that can be performed everywhere in space and time through the same basic input-output operations. For living organisms, however, the rules of life can be modified or ‘programmed’ to solve unique biological problems – these organisms can adapt themselves and their environments. That’s why, if the abiotic universe is a Difference Engine, life is an Analytical Engine. This shift from one to the other marks the moment when matter became defined by computation and problem-solving. Certainly, specialised chemistry was required for this transition, but the fundamental revolution was not in matter but in logic.
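The distinction can be put in code. A loose sketch (our analogy, not the authors'): a Difference Engine embodies one fixed rule, while an Analytical Engine accepts the rule itself as input– and so can be re-programmed:

```python
# A loose illustration of Babbage's distinction (our analogy, not the
# authors'): a Difference Engine embodies one fixed rule, while an
# Analytical Engine takes the rule itself as input.

def difference_engine(x):
    return 2 * x + 1                  # one law, fixed at construction

def analytical_engine(program, x):
    return program(x)                 # the law itself is data

print(difference_engine(3))                        # 7
print(analytical_engine(lambda x: x * x, 3))       # 9
print(analytical_engine(difference_engine, 3))     # 7: it subsumes the fixed machine
```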
In that moment, there emerged for the first time in the history of the Universe a big problem to give the Big Bang a run for its money. To discover this big problem – to understand how matter has been able to adapt to a seemingly endless range of environments – many new theories and abstractions for measuring, discovering, defining and synthesising life have emerged in the past century. Some researchers have synthesised life in silico. Others have experimented with new forms of matter. And others have discovered new laws that may make life as inescapable as physics…
Eminently worth reading in full: “Problem-solving matter,” from @sfiscience and @aeonmag.
Pair with “At the limits of thought” (also by Krakauer).
* Albert Einstein
###
As we obsess over ontology, we might spare a thought for someone concerned with life as it is lived: Sigismund Schlomo “Sigmund” Freud; he died on this date in 1939. A neurologist, he was the founder of psychoanalysis– a clinical method for evaluating and treating pathologies seen as originating in conflicts in the psyche through dialogue between patient and psychoanalyst– and of the distinctive theory of mind and human agency derived from it.
“One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”*…
And yet, for centuries no one has succeeded. Now, as Erica Klarreich reports, cryptographers want to know which of five possible worlds we inhabit, which will reveal whether truly secure cryptography is even possible…
Many computer scientists focus on overcoming hard computational problems. But there’s one area of computer science in which hardness is an asset: cryptography, where you want hard obstacles between your adversaries and your secrets.
Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment.
To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible?
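A toy illustration of "hardness as an asset" (our own sketch, not from the article): multiplying two primes is easy, but recovering them from their product is believed– though not proven– to be hard, and real systems use primes hundreds of digits long:

```python
# A toy illustration of "hardness as an asset" (our sketch, not from the
# article): multiplying primes is easy; undoing the multiplication is
# believed -- though not proven -- to be hard at cryptographic sizes.

p, q = 2_147_483_647, 2_305_843_009_213_693_951   # two Mersenne primes
n = p * q                                          # easy: one multiplication

def factor(n):
    """Brute-force trial division of an odd composite -- fine for toys,
    hopeless at the 2048-bit sizes used to secure internet traffic."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(factor(35))   # (5, 7) -- instant at toy scale
# factor(n) above would need roughly a billion trial divisions; scale the
# primes up to real key sizes and the same search outlasts the universe.
```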
In 1995, Russell Impagliazzo of the University of California, San Diego broke down the question of hardness into a set of sub-questions that computer scientists could tackle one piece at a time. To summarize the state of knowledge in this area, he described five possible worlds — fancifully named Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania — with ascending levels of hardness and cryptographic possibility. Any of these could be the world we live in…
Explore each of them– and their implications for secure encryption– at “Which Computational Universe Do We Live In?” from @EricaKlarreich in @QuantaMagazine.
###
As we contemplate codes, we might send communicative birthday greetings to a frequently-featured hero of your correspondent, Claude Elwood Shannon; he was born on this date in 1916. A mathematician, electrical engineer– and cryptographer– he is known as “the father of information theory.” But he is also remembered for his contributions to digital circuit design theory and for his cryptanalysis work during World War II, both as a codebreaker and as a designer of secure communications systems.

“Visualization gives you answers to questions you didn’t know you had”*…
Physical representations of data have existed for thousands of years. The List of Physical Visualizations (and the accompanying Gallery) collect illustrative examples, e.g…
5500 BC – Mesopotamian Clay Tokens
The earliest data visualizations were likely physical: built by arranging stones or pebbles, and later, clay tokens. According to an eminent archaeologist (Schmandt-Besserat, 1999):
“Whereas words consist of immaterial sounds, the tokens were concrete, solid, tangible artifacts, which could be handled, arranged and rearranged at will. For instance, the tokens could be ordered in special columns according to types of merchandise, entries and expenditures; donors or recipients. The token system thus encouraged manipulating data by abstracting all possible variables. (Harth 1983: 19) […] No doubt patterning, the presentation of data in a particular configuration, was developed to highlight special items (Luria 1976: 20).”
Clay tokens suggest that physical objects were used to externalize information, support visual thinking and enhance cognition way before paper and writing were invented…
There are 370 entries (so far). Browse them at the List of Physical Visualizations (@dataphys).
###
As we celebrate the concrete, we might send carefully-calculated birthday greetings to Rolf Landauer; he was born on this date in 1927. A physicist, he made a number of important contributions in a range of areas: the thermodynamics of information processing, condensed matter physics, and the conductivity of disordered media.
He is probably best remembered for “Landauer’s Principle,” which describes the minimum energy cost of a computer’s operation. Whenever the machine resets for another computation, bits are flushed from its memory, and in that electronic operation a certain amount of energy is lost (a simple logical consequence of the second law of thermodynamics). Thus, when information is erased, there is an inevitable “thermodynamic cost of forgetting,” which governs the development of more energy-efficient computers. The maximum entropy of a bounded physical system is finite– so while most engineers dealt with the practical limitations of compacting ever more circuitry onto tiny chips, Landauer considered the theoretical limit: if technology improved indefinitely, how soon would it run into the insuperable barriers set by nature?
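The bound itself fits on the back of an envelope: erasing one bit must dissipate at least kT ln 2. A quick computation (standard physics, not a figure from the post):

```python
# Landauer's bound, back of the envelope (standard physics, not a figure
# from the post): erasing one bit dissipates at least k*T*ln(2).

import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # roughly room temperature, K

e_min = k_B * T * math.log(2)
print(f"{e_min:.2e} joules per bit erased")   # ~2.87e-21 J
```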
A so-called logically reversible computation, in which no information is erased, may in principle be carried out without releasing any heat. This has led to considerable interest in the study of reversible computing. Indeed, without reversible computing, increases in the number of computations per joule of energy dissipated must eventually come to a halt. If Koomey‘s law continues to hold, the limit implied by Landauer’s principle would be reached around the year 2050.
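What “logically reversible” means can be seen in a single gate. A sketch (our illustration): the CNOT gate computes XOR while preserving enough information to run backwards, so no bit is erased:

```python
# What "logically reversible" means, in one gate (our illustration): the
# CNOT gate computes XOR while preserving enough information to run
# backwards, so no bit is erased and no Landauer cost need be paid.

def cnot(a, b):
    """Reversible XOR: (a, b) -> (a, a XOR b); two bits in, two bits out."""
    return a, a ^ b

# CNOT is its own inverse: applying it twice restores the inputs.
for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)
```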




