(Roughly) Daily

Posts Tagged ‘Barbara McClintock’

“The most important factor in survival is neither intelligence nor strength but adaptability”*…

A close-up image of a nautilus, showcasing its spiral shell and soft body in a vibrant underwater setting.
A new mathematical model showed that evolutionary bursts led to the emergence of almost all characteristic cephalopod traits such as tentacles.

Indeed. Scientists have accepted this precept since Charles Darwin‘s publication of On the Origin of Species. But how– and at what pace– does that adaptation happen? From those earliest days, the assumption was that change/adaptation happened slowly, roughly evenly– gradually– over time.

But in 1972, paleontologists Niles Eldredge and Stephen Jay Gould published a landmark paper developing a theory they called punctuated equilibria. Their paper built upon Ernst Mayr‘s model of geographic speciation, I. M. Lerner‘s theories of developmental and genetic homeostasis, and their own empirical research. Eldredge and Gould proposed that the degree of gradualism commonly attributed to Darwin is virtually nonexistent in the fossil record, and that stasis dominates the history of most fossil species. Rather, they argued, when significant evolutionary change occurs, it is generally restricted to rare and geologically rapid events of branching speciation called cladogenesis (the process by which a species splits into two distinct species, rather than one species gradually transforming into another).

Jake Buehler reports on recent work that confirms the punctuated equilibrium theory and adds more detail…

Over the last half-billion years, squid, octopuses and their kin have evolved much like a fireworks display, with long, anticipatory pauses interspersed with intense, explosive changes. The many-armed diversity of cephalopods is the result of the evolutionary rubber hitting the road right after lineages split into new species, and precious little of their evolution has been the slow accumulation of gradual change.

They aren’t alone. Sudden accelerations spring from the crooks of branches in evolutionary trees, across many scales of life — seemingly wherever there’s a branching system of inherited modifications — in a dynamic not examined in traditional evolutionary models.

That’s the perspective emerging from a new mathematical framework published in Proceedings of the Royal Society B that describes the pace of evolutionary change. The new model, part of a roughly 50-year-long reimagining of evolution’s tempo, is rooted in the concept of punctuated equilibrium, which was introduced by the paleontologists Niles Eldredge and Stephen Jay Gould in 1972.

“Species would just sit still in the fossil record for millions of years, and then all of a sudden — bang! — they would turn into something else,” explained Mark Pagel, an evolutionary biologist at the University of Reading in the United Kingdom.

Punctuated equilibrium was initially a controversial proposal. The theory diverged from the dominant, century-long view that evolution adhered to a slow, steady pace of Darwinian gradualism, in which species incrementally and almost imperceptibly developed into new ones. It opened the confounding possibility that there was a discontinuity between the selection processes behind the microevolutionary changes that occur within a population and those driving the long-term, broad-scale changes that take place higher than the species level, known as macroevolution.

In the decades since, researchers have continued to debate these views as they’ve gathered more data: Paleontologists have accumulated fossil datasets tracing macroevolutionary changes in ancient lineages, while molecular biologists have reconstructed microevolution on a more compressed timescale — in DNA and the proteins it encodes.

Now there are enough datasets to more fully test the theories of evolutionary change. Recently, a team of scientists blended insights from several evolutionary models with new methods to build a mathematical framework that better captures real evolutionary processes. When the team applied their tools to a selection of evolutionary datasets (including their own data from research into an ancient protein family), they found that evolutionary spikes weren’t just common, but somewhat predictably clustered at the forks in the evolutionary tree.

Their model showed that proteins contort themselves into new iterations more rapidly around the time they diverge from each other. Human languages twist and recast themselves at the bifurcations in their own family tree. Cephalopods’ soft bodies sprout arms and bloom with suckers at these same splits.
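The split-and-burst dynamic described above can be caricatured in a few lines of code. This is only a toy illustration of the general idea (not the authors’ actual framework): a trait drifts gradually through small random steps during long periods of near-stasis, then takes a large concentrated jump at a branching event, so most of the divergence between sister lineages ends up attributable to the burst. All function names and parameter values here are invented for the sketch.

```python
import random

def evolve(trait, years, step=0.01):
    """Gradual drift: many small random changes accumulate over time."""
    for _ in range(years):
        trait += random.gauss(0, step)
    return trait

def speciate(trait, burst=2.0):
    """Punctuated burst: a large jump concentrated at the branching event,
    producing two diverging daughter lineages."""
    return (trait + random.gauss(0, burst),
            trait + random.gauss(0, burst))

random.seed(1)
# Two thousand "years" of near-stasis, then a split with a burst:
lineage = evolve(0.0, 2000)
a, b = speciate(lineage)
# The gap between the sister lineages is typically dominated by the
# burst at the fork, not by the slow drift before it.
print(abs(a - b))
```

On average, 2000 drift steps of size 0.01 move a lineage by well under one unit, while a single burst of size 2.0 separates the daughters by a couple of units — the toy analogue of change clustering at the forks of the tree.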

The new study adds to previous support for the punctuated equilibrium phenomenon, said Pagel, who wasn’t involved in the project. However, the rapid evolutionary behavior isn’t a unique process separate from natural selection, as Eldredge and Gould suggested, but rather the result of periods of extremely rapid adaptation propelling evolutionary change.

“This is really a rather beautiful story in the philosophy of science,” Pagel said…

Read on for the fascinating story of how the updated evolutionary model shows that living systems evolve in a split-and-hit-the-gas dynamic, where new lineages appear in sudden bursts rather than through a long marathon of gradual changes: “The Sudden Surges That Forge Evolutionary Trees,” from @jakebuehler.bsky.social in @quantamagazine.bsky.social.

Given the strains that the Anthropocene is putting on our environment, this could be timely…

* Charles Darwin

###

As we dissect development, we might spare a thought for Barbara McClintock; she died on this date in 1992. A cytogeneticist, she is regarded as one of the most important figures in the history of genetics. In the 1940s and 50s McClintock’s work on the cytogenetics of maize led her to theorize that genes are transposable – they can move around – on and between chromosomes. McClintock drew this inference by observing changing patterns of coloration in maize kernels over generations of controlled crosses. The idea that genes could move did not seem to fit with what was then known about genes, but improved molecular techniques of the late 1970s and early 1980s allowed other scientists to confirm her discovery. She was awarded the 1983 Nobel Prize in Physiology or Medicine, the first American woman to win an unshared Nobel Prize.

For more on McClintock’s work and its legacy, see here and here.

Black and white photo of a woman with glasses working in a laboratory, using a microscope and examining samples.

source

Written by (Roughly) Daily

September 3, 2025 at 1:00 am

“Memory resides not just in brains but in every cell”*…

An artistic representation of a cell illustrated with two faces merging in its center, surrounded by cellular structures like mitochondria and various organelles, set against a gradient background of soft colors.

As the redoubtable Claire L. Evans [and here] reports, a small but enthusiastic group of neuroscientists is exhuming overlooked experiments and performing new ones to explore whether cells record past experiences — fundamentally challenging our understanding of what memory is…

In 1983, the octogenarian geneticist Barbara McClintock stood at the lectern of the Karolinska Institute in Stockholm. She was famously publicity averse — nearly a hermit — but it’s customary for people to speak when they’re awarded a Nobel Prize, so she delivered a halting account of the experiments that had led to her discovery, in the early 1950s, of how DNA sequences can relocate across the genome. Near the end of the speech, blinking through wire-framed glasses, she changed the subject, asking: “What does a cell know of itself?”

McClintock had a reputation for eccentricity. Still, her question seemed more likely to come from a philosopher than a plant geneticist. She went on to describe lab experiments in which she had seen plant cells respond in a “thoughtful manner.” Faced with unexpected stress, they seemed to adjust in ways that were “beyond our present ability to fathom.” What does a cell know of itself? It would be the work of future biologists, she said, to find out.

Forty years later, McClintock’s question hasn’t lost its potency. Some of those future biologists are now hard at work unpacking what “knowing” might mean for a single cell, as they hunt for signs of basic cognitive phenomena — like the ability to remember and learn — in unicellular creatures and nonneural human cells alike. Science has long taken the view that a multicellular nervous system is a prerequisite for such abilities, but new research is revealing that single cells, too, keep a record of their experiences for what appear to be adaptive purposes.

In a provocative study published in Nature Communications late last year, the neuroscientist Nikolay Kukushkin and his mentor Thomas J. Carew at New York University showed that human kidney cells growing in a dish can “remember” patterns of chemical signals when they’re presented at regularly spaced intervals — a memory phenomenon common to all animals, but unseen outside the nervous system until now. Kukushkin is part of a small but enthusiastic cohort of researchers studying “aneural,” or brainless, forms of memory. What does a cell know of itself? So far, their research suggests that the answer to McClintock’s question might be: much more than you think…

[Evans explains the prevailing wisdom, outlines the experiments that have challenged it, and unpacks (at least some reasons for) resistance to the notion of cellular-scale memory, both sociological and semantic…]

… In neuroscience, [biochemist and neuroscientist Nikolay] Kukushkin writes, the most common definition of memory is that it’s what remains after experience to change future behavior. This is a behavioral definition; the only way to measure it is to observe that future behavior. Think of S. roeselii snapping back into its holdfast, or a lab rat freezing up at the sight of an electrified maze it’s tangled with before. In these cases, how an organism reacts is a clue that prior experience left a lingering trace.

But is a memory only a memory when it’s associated with an external behavior? “It seems like an arbitrary thing to decide,” Kukushkin said. “I understand why it was historically decided to be that, because [behavior] is the thing you can measure easily when you’re working with an animal. I think what happened is that behavior started as something that you could measure, and then it ended up being the definition of memory.”

Behavior tells us that a memory has formed but says nothing about why, how or where. Further, it’s constrained by scale. Take Aplysia californica, a muscular sea slug with enormous neurons (the largest is about the size of a letter on a U.S. penny). Neuroscientists love to conduct memory experiments on Aplysia because its physical responses are easy to measure — poke it and it flinches — and they map cleanly to the handful of sensory and motor neurons involved.

The sea slug, Kukushkin said, can complicate neuroscience’s behavioral bias. Say you shock its tail, triggering a defensive reflex. If you shock it again the next day and find that the defensive reflex is stronger than it was before, that’s behavioral evidence that the slug remembers its initial shock. Any neuroscientist would associate it with a memory.

But what if (apologies to the squeamish) you take that sea slug apart and leave only its immobile neurons? Unlike the intact creature, the neurons can’t retract, so there will be no visible response. Is the memory gone? Certainly not, but without external validation, a behavioral definition of memory breaks down. “We no longer call that a memory,” Kukushkin said. “We call that a mechanism for a memory, we call that synaptic change underlying memory, we call that an analogue of memory. But we don’t call that a memory, and I think that it’s arbitrary.”

Perhaps a definition of memory should extend beyond behavior to encompass more records of the past. A vaccination is a kind of memory. So is a scar, a child, a book. “If you make a footprint, it’s a memory,” said [cognitive scientist Sam] Gershman. An interpretation of memory as a physical event — as a mark made on the world, or on the self — would encompass the biochemical changes that occur within a cell. “Biological systems have evolved to harness those physical processes that retain information and use them for their own purposes,” Gershman said.

So, what does a cell know of itself? Perhaps a better version of Barbara McClintock’s question is: What can a cell remember? When it comes to survival, what a cell knows of itself isn’t as important as what it knows of the world: how it incorporates information about its experiences to determine when to bend, when to battle and when to make a break for it.

A cell preserves the information that preserves its existence. And in a sense, so do we. As today’s cellular memory researchers revisit abandoned experimental threads from the past, they too are discovering what memory owes to its context, how science’s sociological environment can determine which ideas are preserved and which are forgotten. It’s almost as though a field is waking up from a 50-year spell of amnesia. Fortunately, the memories are flooding back…

“What Can a Cell Remember?” from @theuniverse.bsky.social in @quantamagazine.bsky.social.

For more on the work that got Barbara McClintock onto the Nobel podium see here.

And, also apposite, a pair of cautionary historical examples of scientists who followed Jean-Baptiste Lamarck– who argued in the early 19th century that an organism can pass on to its offspring physical characteristics that the parent acquired through use or disuse during its lifetime, which is to say that learning (a kind of memory) is heritable– and went astray: Lysenko and Kammerer.

* James Gleick, The Information

###

As we muse on memory (and note that one cannot remember– and learn from– what one cannot know), we might recall that it was on this date in 1735 that New York Weekly Journal publisher and writer John Peter Zenger was acquitted of seditious libel against the royal governor of New York, William Cosby, on the basis that what he had published was true.

In 1733, Zenger had begun printing The New York Weekly Journal, voicing opinions critical of the colonial governor.  On November 17, 1734, on Cosby’s orders, the sheriff arrested Zenger. After a grand jury refused to indict him, the Attorney General Richard Bradley charged him with libel. Zenger’s lawyers, Andrew Hamilton and William Smith, Sr., successfully argued that truth is a defense against charges of libel… and Zenger became a symbol for freedom of the press.

An illustration depicting a courtroom scene with a judge, lawyers, and an audience, capturing the atmosphere of a historical trial.
Andrew Hamilton defending John Peter Zenger in court, 1734–1735 (source)

“‘Life’ is of course a misnomer, since viruses, lacking the ability to eat or respire, are officially dead”*…

The human genome contains billions of pieces of information and around 22,000 genes, but not all of it is, strictly speaking, human. Eight percent of our DNA consists of remnants of ancient viruses, and another 40 percent is made up of repetitive strings of genetic letters that are also thought to have a viral origin. Those extensive viral regions are much more than evolutionary relics: They may be deeply involved with a wide range of diseases including multiple sclerosis, hemophilia, and amyotrophic lateral sclerosis (ALS), along with certain types of dementia and cancer.

For many years, biologists had little understanding of how that connection worked—so little that they came to refer to the viral part of our DNA as dark matter within the genome. “They just meant they didn’t know what it was or what it did,” explains Molly Gale Hammell, an associate professor at Cold Spring Harbor Laboratory. It became evident that the virus-related sections of the genetic code do not participate in the normal construction and regulation of the body. But in that case, how do they contribute to disease?

An early clue came from the pioneering geneticist Barbara McClintock, who spent much of her career at CSHL. In the 1940s, long before the decoding of the human genome, she realized that some stretches of our DNA behave like infectious invaders. These DNA chunks can move around through the genome, copying and pasting themselves wherever they see fit, which inspired McClintock to call them “jumping genes.” Her once-controversial idea earned her a Nobel Prize in 1983.

Geneticists have since determined that jumping genes originate in the viral portion of the genome. Many of these genes turn out to be benign or even helpful. “But some of the things are full-on parasites,” Hammell says, like infections embedded within our own DNA. All it takes to set these bad actors loose, she is finding, is a slip-up in the body’s mechanisms that normally prevent the genes from jumping around and causing harm…

Half of your genome started out as an infection; if left unchecked, some parts of it can turn deadly all over again: “The Non-Human Living Inside of You.”

See also: “The Wisdom of Pandemics– viruses are active agents, existing within rich lifeworlds. A safe future depends on understanding this evolutionary story.”

* “‘Life’ is of course a misnomer, since viruses, lacking the ability to eat or respire, are officially dead, which is in itself intriguing, showing as it does that the habit of predation can be taken up by clusters of molecules that are in no way alive.” – Barbara Ehrenreich, Living with a Wild God: A Nonbeliever’s Search for the Truth about Everything

###

As we check our baggage, we might send reforming birthday greetings to Abraham Flexner; he was born on this date in 1866.  The founding director of Princeton’s Institute for Advanced Study, Flexner is best remembered for his pioneering work as a reformer of American higher education, especially medical education.  On the heels of his 1908 study, The American College, in which he effectively critiqued the university lecture as a method of instruction, he published the Flexner Report, which examined the state of American medical education and led to far-reaching reform in the training of doctors.  The report called on American medical schools to enact higher admission and graduation standards, and to adhere strictly to the protocols of mainstream science in their teaching and research.  While one unintended consequence of Flexner’s impactful advocacy was the reversion of American universities to male-only admittance programs to accommodate a smaller admission pool (female admissions picked up again only later in the century), most historians agree with his biographer, Thomas Bonner, that Flexner was “the severest critic and the best friend American medicine ever had.”

 source