Posts Tagged ‘history of science’
“Good night, sleep tight, don’t let the bedbugs bite”*…
As Rodrigo Pérez Ortega reports, that admonition has a very long history…
Long before rats roamed sewers and cockroaches lurked in kitchen corners, another unwelcome guest plagued early civilizations. A new genomic study published today in Biology Letters suggests that bedbugs—the blood-feeding insects that haunt our hotel stays—were the first urban pests, proving an itchy menace for tens of thousands of years.
“This is really amazing,” says Klaus Reinhardt, an evolutionary biologist at the Dresden University of Technology who was not involved in the new study. “I think the hypothesis is quite solid.” Still, other researchers quibble over whether bedbugs can indisputably claim that title.
Many species of bedbugs depend on us—and our blood—to survive, but long ago, their prey of choice was probably exclusively bats. Genetic evidence suggests that about 245,000 years ago, some bedbugs made the jump to early humans.
This split led to two genetically distinct bedbug lineages. One kept feeding on bats and today remains largely confined to caves and natural habitats in Europe and the Middle East. The other followed humans into modern dwellings. Exactly how that scenario played out remained a mystery, however. That’s why Warren Booth, an evolutionary biologist at the Virginia Polytechnic Institute and State University, and his team set out to study the genome of the common bedbug (Cimex lectularius) in depth…
… [Their findings make] bedbugs strong contenders for the title of the world’s first true urban pest that relies solely on humans, the researchers claim. Unlike more recent urban interlopers that feast on our stored food and enjoy our cozy homes—like the German cockroach (Blattella germanica), which formed a close association with humans just 2000 years ago, or the black rat (Rattus rattus), whose commensal relationship began about 5000 years ago—bedbugs may have started parasitizing humans just as our ancestors started building permanent settlements…
… the new findings underscore how humans have shaped the evolution of urban insects. Compared with their bat-feeding cousins, human-feeding bedbugs are smaller, less hairy, and have larger limbs—adaptations likely suited to navigating smooth walls and synthetic bedding. Today’s bedbugs also carry many DNA mutations linked to insecticide resistance, a relatively recent trait that reflects the pressures of modern pest control. “They’re a remarkable yet horrible species,” Booth says.
Understanding how these pests evolved together with us could help improve strategies for controlling them, especially as cities continue to grow—and as bedbugs now feed on the poultry we raise. Further research could also help us understand how our own immune system evolved, since some people develop allergies to bedbug bites. As a start, Booth and his team are analyzing centuries-old bedbug specimens in museums, to track how the insects’ genomes—and populations—have evolved over the past century alongside us.
“There’s a pretty intimate association, whether we like it or not,” Booth says. “That’s not going away anytime soon.”…
“Bedbugs may be the first urban pest,” from @rpocisv.bsky.social in @science.org.
* common children’s rhyme
###
As we contemplate the chronicle of a co-evolved curse, we might recall that it was on this date in 1789 that Richard Kirwan published his essay in support of the phlogiston theory (the belief, dating to alchemical times, in a fire-like element, dubbed “phlogiston,” contained within combustible bodies and released during burning). Kirwan was among the last of its advocates.
A well-regarded scientist in the late 18th and early 19th centuries, Kirwan met and corresponded with Black, Lavoisier, Priestley, and Cavendish. Indeed, while scientific history remembers him as a defender of an incorrect theory, his work probably spurred Priestley and Lavoisier, who respectively discovered and named the actual elemental agent of combustion, oxygen.
But Kirwan is also remembered for a personal eccentricity (one of many) relevant to this post: he hated bugs (especially flies). He paid his servant a bounty for each one they killed.
“There is only one world, the natural world, exhibiting patterns we call the ‘laws of nature’”*…

The quote above (given in full in the footnote below) expresses the substantive understanding of scientific naturalism that reigns today. Indeed, the modern era is often seen as the triumph of science over supernaturalism. But, as Peter Harrison explains, what really happened is far more interesting…
By any measure, the scientific revolution of the 17th century was a significant milestone in the emergence of our modern secular age. This remarkable historical moment is often understood as science finally liberating itself from the strictures of medieval religion, striking out on a new path that eschewed theological explanations and focused its attentions solely on a disenchanted, natural world. But this version of events is, at best, half true.
Medieval science, broadly speaking, had followed Aristotle in seeking explanations in terms of the inherent causal properties of natural things. God was certainly involved, at least to the extent that he had originally invested things with their natural properties and was said to ‘concur’ with their usual operations. Yet the natural world had its own agency. Beginning in the 17th century, the French philosopher and scientist René Descartes and his fellow intellectual revolutionaries dispensed with the idea of internal powers and virtues. They divested natural objects of inherent causal powers and attributed all motion and change in the universe directly to natural laws.
But, for all their transformative influence, key agents in the scientific revolution such as Descartes, Johannes Kepler, Robert Boyle and Isaac Newton are not our modern and secular forebears. They did not share our contemporary understandings of the natural or our idea of ‘laws of nature’ that we imagine underpins that naturalism…
[Harrison traces the history of the often contentious, but ultimately momentous rise of naturalism, then considers the historical accounts of that ascension– and what they gloss over or miss altogether. He then turns to why that matters…]
… the contrived histories of naturalism that purport to show its victory over supernaturalism were fabricated in the 19th century and are simply not consistent with the historical evidence. They are also tainted by a cultural condescension that, in the past at least, descended into outright racism. Few, if any, would today endorse the chauvinism that attends these older, triumphalist accounts of the history of naturalism. Yet, it is worth reflecting upon the extent to which elements of cultural condescension necessarily colour scholarly endeavours that are premised on the imagined ‘neutral’ grounds of naturalism. Careful consideration of the contingent historical circumstances that gave rise to present analytic categories that enjoy significant standing and authority would suggest that there is nothing especially neutral or objective about them. Any clear-eyed cross-cultural comparison – one that refrains from assessing worldviews in terms of how they measure up to the standard of the modern West – will reinforce this. We might go so far as to adopt a form of ‘reverse anthropology’, where we think how our own conceptions of the world might look if we adopted the frameworks of others. This might entail dispensing with the idea of the supernatural, and attempting to think outside the box of our recently inherited natural/supernatural distinction.
History [that is, the “actual” history that Harrison recounts] suggests that our regnant modern naturalism is deeply indebted to monotheism, and that its adherents may need to abandon the comforting idea that their naturalistic commitments are licensed by the success of science. As for the idea of the supernatural, ironically this turns out to be far more important for the identity of those who wish to deny its reality than it had ever been for traditional religious believers…
Fascinating and provocative: “The birth of naturalism,” from @uqpharri in @aeonmag.
* “There is only one world, the natural world, exhibiting patterns we call the ‘laws of nature’, and which is discoverable by the methods of science and empirical investigation. There is no separate realm of the supernatural, spiritual, or divine; nor is there any cosmic teleology or transcendent purpose inherent in the nature of the universe or in human life.” – Sean Carroll, The Big Picture
###
As we rethink reality, we might recall that it was on this date in 1588 that Tycho Brahe first outlined his “Tychonic system” concept of the structure of the solar system. The Tychonic system was a hybrid, combining the basic geocentric scheme of Ptolemy with the heliocentric idea of Nicolaus Copernicus. Published in his De mundi aetherei recentioribus phaenomenis, Tycho’s proposal, retaining Aristotelian physics, kept the Sun and Moon revolving about the Earth at the center of the universe, with the shell of the fixed stars, at a great distance, also centered on the Earth. But like Copernicus, he agreed that Mercury, Venus, Mars, Jupiter, and Saturn revolved about the Sun. Thus he could explain the motions of the heavens without “crystal spheres” carrying the planets through complex Ptolemaic epicycles.

On this same date, in 1633, Galileo Galilei arrived in Rome to face trial before the Inquisition. His crime was professing the belief that the earth revolves around the sun– based on observations that he’d made, building on the work of Copernicus and Tycho.

“The world of reality has its limits; the world of imagination is boundless”*…
Still, it’s useful to know the difference… and as Yasemin Saplakoglu explains, that’s a complex process– one that science takes very seriously…
As I sit at my desk typing up this newsletter, I can see a plant to my left, a water bottle to my right and a gorilla sitting across from me. The plant and bottle are real, but the gorilla is a product of my mind — and I intuitively know that this is true. That’s because my brain, like most people’s, has the ability to distinguish reality from imagination. If it didn’t, or if I had a condition that disrupts this distinction, I’d constantly see gorillas and elephants where they don’t exist.
Imagination is sometimes described as perception in reverse. When we look at an object, electromagnetic waves enter the eyes, where they are translated into neural signals that are then sent to the visual cortex at the back of the brain. This process generates an image: “plant.” With imagination, we start with what we want to see, and the brain’s memory and semantic centers send signals to the same brain region: “gorilla.”
In both cases, the visual cortex is activated. Recalling memories can also activate some of the same regions. Yet the brain can clearly distinguish between imagination, perception and memory in most cases (though it is still possible to get confused). How does it keep everything straight?
By probing the differences between these processes, neuroscientists are untangling how the human brain creates our experience. They’re finding that even our perception of reality is in many ways imagined. “Underneath our skull, everything is made up,” Lars Muckli, a professor of visual and cognitive neurosciences at the University of Glasgow, told me. “We entirely construct the world in its richness and detail and color and sound and content and excitement. … It is created by our neurons.”
To distinguish reality and imagination, the brain might have some kind of “reality threshold,” according to one theory. Researchers recently tested this by asking people to imagine specific images against a backdrop — and then secretly projected faint outlines of those images there. Participants typically recognized when they saw a real projection versus their imagined one, and those who rated images as more vivid were also more likely to identify them as real. The study suggested that when processing images, the brain might make a judgment on reality based on signal strength. If the signal is weak, the brain takes it for imagination. If it’s strong, the brain deems it real. “The brain has this really careful balancing act that it has to perform,” Thomas Naselaris, a neuroscientist at the University of Minnesota, told me. “In some sense it is going to interpret mental imagery as literally as it does visual imagery.”
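To make the “reality threshold” idea concrete, here is a deliberately cartoonish sketch in Python– an editorial aside; the pooling rule and every number in it are invented, not taken from the study Saplakoglu describes. Imagined and external signals are pooled, and the total is judged “real” only if it clears a fixed threshold, so faint real projections and unusually vivid imagery both push more trials over the line:

```python
import random

THRESHOLD = 1.0  # hypothetical "reality threshold" (arbitrary units)

def judged_real(imagery_vividness: float, projection_present: bool) -> bool:
    """One simulated trial: pool internal and external signal and compare
    the total against the fixed reality threshold."""
    imagery_signal = random.gauss(imagery_vividness, 0.2)
    external_signal = random.gauss(0.8, 0.2) if projection_present else 0.0
    return imagery_signal + external_signal > THRESHOLD

def rate_judged_real(vividness: float, projection: bool, n: int = 10_000) -> float:
    """Fraction of n simulated trials judged 'real' under the toy model."""
    return sum(judged_real(vividness, projection) for _ in range(n)) / n

# Faint projections push the pooled signal over threshold more often, and
# vivid imagers are more likely to call pure imagination real.
for vividness, label in ((0.3, "weak imagery "), (0.9, "vivid imagery")):
    for projection in (False, True):
        print(label, "| projection:", projection,
              "| judged real:", round(rate_judged_real(vividness, projection), 3))
```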
Although recalling memories is a creative and imaginative process, it activates the visual cortex as if we were seeing. “It started to raise the question of whether a memory representation is actually different from a perceptual representation at all,” Sam Ling, a neuroscientist at Boston University, told me. A recent study looked to identify how memories and perceptions are constructed differently at the neurobiological level. When we perceive something, visual cues undergo layers of processing in the visual cortex that increase in complexity. Neurons in earlier parts of this process fire more precisely than those that get involved later. In the study, researchers found that during memory recall, neurons fired in a much blurrier way through all the layers. That might explain why our memories aren’t often as crisp as what we’re seeing in front of us…
“How Do Brains Tell Reality From Imagination?” from @yaseminsaplakoglu.bsky.social in @quantamagazine.bsky.social.
* Jean-Jacques Rousseau
###
As we parse perception, we might send mindful birthday greetings to a man whose work figures into the history of science’s struggle with this issue, Franz Brentano; he was born on this date in 1838. A philosopher and psychologist, he is best known for his 1874 Psychology from an Empirical Standpoint, considered his magnum opus and credited with having reintroduced the medieval scholastic concept of intentionality into contemporary philosophy and psychology.
Brentano also studied perception, with conclusions that prefigure the discussion above…
He is also well known for claiming that Wahrnehmung ist Falschnehmung (‘perception is misconception’), that is to say, that perception is erroneous. In fact he maintained that external, sensory perception could not tell us anything about the de facto existence of the perceived world, which could simply be illusion. However, we can be absolutely sure of our internal perception. When I hear a tone, I cannot be completely sure that there is a tone in the real world, but I am absolutely certain that I do hear. This awareness, of the fact that I hear, is called internal perception. External perception, sensory perception, can only yield hypotheses about the perceived world, but not truth. Hence he and many of his pupils (in particular Carl Stumpf and Edmund Husserl) thought that the natural sciences could only yield hypotheses and never universal, absolute truths as in pure logic or mathematics.
However, in a reprinting of his Psychologie vom Empirischen Standpunkte (Psychology from an Empirical Standpoint), he recanted this previous view. He attempted to do so without reworking the previous arguments within that work, but it has been said that he was wholly unsuccessful. The new view states that when we hear a sound, we hear something from the external world; there are no physical phenomena of internal perception… – source
“Faith is a fine invention / When gentlemen can see, / But microscopes are prudent / In an emergency”*…
Microscopy has been around for centuries; it began to emerge as a field of scientific investigation with the development of compound microscopes in Europe around 1620. Antonie van Leeuwenhoek developed a very high magnification simple microscope in the 1670s and is often considered to be the first acknowledged microscopist and microbiologist. His fellow pioneer, Robert Hooke (author of what many believe to have been van Leeuwenhoek’s inspiration, the ground-breaking Micrographia, published in 1665), wrote “By the help of microscopes, there is nothing so small, as to escape our inquiry; hence there is a new visible world discovered to the understanding.”
Optical microscopes remain central tools in science, and have been joined by electron and scanning probe microscopes (along with the emerging field of X-ray microscopy). But, as Joao Inacio Silva illustrates, they are also fascinating objects, things of beauty…
Antique microscopes are amazing scientific instruments, from times when craftsmanship was as important as functionality and performance. The beauty of these instruments is manifested in countless ways, including the history of their makers, their technological developments, and their contribution to the development of microbiology and other fields of science; all of these combine to inspire a feeling of admiration that a microscope can be so beautiful, elegant, and functional after so many years. An antique microscope is a work of art as well as science.
This [site] describes a collection of microscopes, which started as a hobby some years ago, and is always being updated with interesting new instruments….
For example:
Gustave Moreau (1805 – 1880) was a manufacturer of binoculars operating in Paris from 1830. The business of Moreau was merged with other opticians in 1849, forming the Deraisme house (167 Rue Saint-Maur, Paris), which specialised in binoculars and spotting telescopes, particularly for military use. Moreau is best known for the creation of the famous ‘Monkey Microscope’. [Pictured at the top] Microscope 199 is a drum-like microscope and is engraved with ‘Moreau’ in its inside base… The instrument should be dated to the mid-19th century.
Moritz (M.) Pillischer emigrated from Hungary to London, England, in 1845. He opened an independent shop that produced microscopes and other scientific and mathematical instruments in about 1849. Moritz’s nephew, Jacob (who adopted the name “James”), moved to London around 1860 to work for his uncle. Jacob later became Moritz’s son-in-law, after marrying one of his daughters. Pillischer did not make his own lenses until 1854, but instead provided French-made objectives with his instruments. Moritz Pillischer was elected as a Fellow of the Royal Microscopical Society in 1855 and joined the Quekett Microscopical Club in 1869. By 1881, Moritz had moved to Hove, Sussex, although he retained ownership of the Pillischer optical business. He handed over ownership of the business to Jacob in 1887 and passed away in his Sussex home in 1893. Jacob joined the Quekett Microscopical Club in 1895, and the Royal Microscopical Society in 1898. After Jacob’s death in 1930, the company was inherited by Jacob’s three children, Edward, Leopold, and Bertha, and the business was liquidated in 1947.

Microscope 17 is a version of Pillischer’s Student microscope from c. 1860, with the serial number 1011. The microscope is finished in lacquered brass and has an extendable eyepiece tube, original Pillischer lenses, rack and pinion main focus and fine focus. It has a square stage with a manually adjustable slide rest. Below the stage is a mirror and a revolving wheel to control the level of light. Pillischer introduced this version of the student microscope in late 1854, and the basic form of this microscope was then used in other models over the next several decades, including the Saint Thomas Hospital (introduced in 1873) and the International (introduced in 1876) models. The microscope came with its original wooden box and several accessories, including a live box used for the observation of wet or dry animals. Early models of live boxes were constructed of ivory or brass and would often fit into the hole in the stage. Later, they were fitted onto a rectangular brass slide above the stage.
Many, many more delights at the “Microscope Museum”, a glorious collection of antique microscopes and other scientific instruments.
* Emily Dickinson
###
As we look closely, we might spare a thought for Christian Goldbach; he died on this date in 1764. A mathematician, lawyer, and historian who studied infinite sums, the theory of curves, and the theory of equations, he is best remembered for his correspondence with Leibniz, Euler, and Bernoulli, especially his 1742 letter to Euler containing what is now known as “Goldbach’s conjecture.”
In that letter he outlined his famous proposition:
Every even natural number greater than 2 is equal to the sum of two prime numbers.
It has been checked by computer for every even number up to roughly 4 × 10^18, but remains unproved.
(Goldbach also conjectured that every odd number greater than 5 is the sum of three primes; a proof of this “weak” version of the conjecture was announced by Harald Helfgott in 2013 and is now widely accepted.)
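Purely as an editorial illustration (not anything from Goldbach or from the sources above), here is the kind of brute-force verification a computer performs, sketched in Python; the bound of 10,000 is arbitrary, chosen only to keep the run instantaneous:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; plenty fast for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to the even number n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

LIMIT = 10_000  # arbitrary small bound, just to keep the check instantaneous
assert all(goldbach_pair(n) is not None for n in range(4, LIMIT + 1, 2))
print(f"Goldbach's conjecture holds for every even number from 4 to {LIMIT}")
print("for example:", 9_998, "=", "%d + %d" % goldbach_pair(9_998))
```

Swapping the trial-division test for a sieve would carry the same check much further, though nowhere near the ranges the large distributed verifications have covered.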

Goldbach’s letter to Euler (source, and larger view)
“Those who are not shocked when they first come across quantum theory cannot possibly have understood it”*…
A scheduling note: your correspondent is headed onto the road for a couple of weeks, so (Roughly) Daily will be a lot more roughly than daily until September 20th or so.
100 years ago, a circle of physicists shook the foundation of science. As Philip Ball explains, it’s still trembling…
In 1926, tensions were running high at the Institute for Theoretical Physics in Copenhagen. The institute was established 10 years earlier by the Danish physicist Niels Bohr, who had shaped it into a hothouse for young collaborators to thrash out a new theory of atoms. In 1925, one of Bohr’s protégés, the brilliant and ambitious German physicist Werner Heisenberg, had produced such a theory. But now everyone was arguing with each other about what it implied for the nature of physical reality itself.
To the Copenhagen group, it appeared reality had come undone…
[Ball tells the story of Niels Bohr’s building on Max Planck, of Werner Heisenberg’s wrangling of Bohr’s thought into theory, of Einstein’s objections and Erwin Schrödinger’s competing theory; then he homes in on the ontological issue at stake…]
Quantum mechanics, they said, demanded we throw away the old reality and replace it with something fuzzier, indistinct, and disturbingly subjective. No longer could scientists suppose that they were objectively probing a pre-existing world. Instead, it seemed that the experimenter’s choices determined what was seen—what, in fact, could be considered real at all.
In other words, the world is not simply sitting there, waiting for us to discover all the facts about it. Heisenberg’s uncertainty principle implied that those facts are determined only once we measure them. If we choose to measure an electron’s speed (more strictly, its momentum) precisely, then this becomes a fact about the world—but at the expense of accepting that there are simply no facts about its position. Or vice versa…
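[For reference– a standard textbook statement, not drawn from Ball’s essay– the trade-off he describes is usually written as the Heisenberg uncertainty relation

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]

where Δx and Δp are the statistical spreads of repeated position and momentum measurements on identically prepared systems, and ħ is the reduced Planck constant.]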
…A century later, scientists are still arguing about this issue of what quantum mechanics means for the nature of reality…
[Ball recounts subsequent attempts to reconcile quantum theory to “reality,” including Schrödinger’s wave mechanics…]
… Schrödinger’s wave mechanics didn’t restore the kind of reality he and Einstein wanted. His theory represented all that could be said about a quantum object in the form of a mathematical expression called the wave function, from which one can predict the outcomes of making measurements on the object. The wave function looks much like a regular wave, like sound waves in air or water waves on the sea. But a wave of what?
At first, Schrödinger supposed that the amplitude of the wave—think of it like the height of a water wave—at a given point in space was a measure of the density of the smeared-out quantum particle there. But Born argued that in fact this amplitude (more precisely, the square of the amplitude) is a measure of the probability that we will find the particle there, if we make a measurement of its position.
This so-called Born rule goes to the heart of what makes quantum mechanics so odd. Classical Newtonian mechanics allows us to calculate the trajectory of an object like a baseball or the moon, so that we can say where it will be at some given time. But Schrödinger’s quantum mechanics doesn’t give us anything equivalent to a trajectory for a quantum particle. Rather, it tells us the chance of getting a particular measurement outcome. It seems to point in the opposite direction of other scientific theories: not toward the entity it describes, but toward our observation of it. What if we don’t make a measurement of the particle at all? Does the wave function still tell us the probability of its being at a given point at a given time? No, it says nothing about that—or more properly, it permits us to say nothing about it. It speaks only to the probabilities of measurement outcomes.
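A tiny numerical illustration of the Born rule may help– this is an editorial sketch in Python, not something from Ball’s article, and the Gaussian wave packet and grid are invented. Once the wave function is normalized, the squared amplitude gives only the odds of each possible position-measurement outcome:

```python
import numpy as np

# A made-up one-dimensional wave function: a Gaussian wave packet on a grid.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0) ** 2 / 4.0) * np.exp(1j * 2.0 * x)  # not yet normalized

# Born rule: the probability density for a position measurement is the
# squared magnitude of the amplitude, normalized so total probability is 1.
density = np.abs(psi) ** 2
density /= density.sum() * dx

# Chance of finding the particle between x = 0 and x = 2, if we measure position.
p_interval = density[(x >= 0) & (x <= 2)].sum() * dx
print(f"P(0 <= x <= 2) = {p_interval:.3f}")

# Each measurement on an identically prepared system yields one definite
# outcome, drawn with Born-rule probabilities; the theory predicts only the odds.
rng = np.random.default_rng(0)
outcomes = rng.choice(x, size=5, p=density * dx)
print("five simulated position measurements:", np.round(outcomes, 2))
```

The point of the sketch is what it cannot do: nothing in it yields a trajectory, only probabilities for measurement outcomes.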
Crucially, this means that what we see depends on what and how we measure. There are situations for which quantum mechanics predicts that we will see one outcome if we measure one way, and a different outcome if we measure the same system in a different way. And this is not, as is sometimes implied (this was the cause of Heisenberg’s row with Bohr), because making a measurement disturbs the object in some physical manner, much as we might very slightly disturb the temperature of a solution in a test-tube by sticking a thermometer into it. Rather, it seems to be a fundamental property of nature that the very fact of acquiring information about it induces a change.
If, then, by reality we mean what we can observe of the world (for how can we meaningfully call something real if it can’t be seen, detected, or even inferred in any way?), it is hard to avoid the conclusion that we play an active role in determining what is real—a situation the American physicist John Archibald Wheeler called the “participatory universe.”..
… Heisenberg’s “uncertainty” captured that sense of the ground shifting. It was not the ideal word—Heisenberg himself originally used the German Ungenauigkeit, meaning something closer to “inexactness,” as well as Unbestimmtheit, which might be translated as “undeterminedness.” It was not that one was uncertain about the situation of a quantum object, but that there was nothing to be certain about.
There was an even more disconcerting implication behind the uncertainty principle. The vagueness of quantum phenomena, when an electron in an atom might seem to jump from one energy state to another at a time of its own choosing, seemed to indicate the demise of causality itself. Things happened in the quantum world, but one could not necessarily adduce a reason why. In his 1927 paper on the uncertainty principle, Heisenberg challenged the idea that causes in nature lead to predictable effects. That seemed to undermine the very foundation of science, and it made the world seem like a lawless, somewhat arbitrary place….
… One of Bohr’s most provocative views was that there is a fundamental distinction between the fuzzy, probabilistic quantum world and the classical world of real objects in real places, where measurements of, say, an electron with a macroscopic instrument tell us that it is here and not there.
What Bohr meant is shocking. Reality, he implied, doesn’t consist of objects located in time and space. It consists of “quantum events,” which are obliged to be self-consistent (in the sense that quantum mechanics can describe them accurately) but not classically consistent with one another. One implication of this, as far as we can currently tell, is that two observers can see different and conflicting outcomes from an event—yet both can be right.
But this rigid distinction between the quantum and classical worlds can’t be sustained today. Scientists can now conduct experiments that probe size scales in between those where quantum and classical rules are thought to apply—neither microscopic (the atomic scale) nor macroscopic (the human scale), but mesoscopic (an intermediate size). We can look, for example, at the behavior of nanoparticles that can be seen and manipulated yet are small enough to be governed by quantum rules. Such experiments confirm the view that there is no abrupt boundary of quantum and classical. Quantum effects can still be observed at these intermediate scales if our devices are sensitive enough, but those effects can be harder to discern as the number of particles in the system increases.
To understand such experiments, it’s not necessary to adopt any particular interpretation of quantum mechanics, but merely to apply the standard theory—encompassed within Schrödinger’s wave mechanics, say—more expansively than Bohr and colleagues did, using it to explore what happens to a quantum object as it interacts with its surrounding environment. In this way, physicists are starting to understand how information gets out of a quantum system and into its environment, and how, as it does so, the fuzziness of quantum probabilities morphs into the sharpness of classical measurement. Thanks to such work, it is beginning to seem that our familiar world is just what quantum mechanics looks like when you are 6 feet tall.
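That last point can be caricatured in a few lines of code– again an editorial aside in Python; the dephasing model and every number in it are invented, not drawn from Ball’s article or from the mesoscopic experiments he mentions. As random, unrecorded interactions with an environment accumulate, a qubit’s off-diagonal “quantumness” washes out while its diagonal, classical probabilities stay put:

```python
import numpy as np

rng = np.random.default_rng(0)

# Density matrix of a qubit in an equal superposition: the off-diagonal entries
# (the "coherences") are what distinguish it from a classical coin flip.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

def coherence_after(n_kicks: int, n_histories: int = 2000) -> float:
    """Crude dephasing model: each unmonitored interaction with the environment
    kicks the qubit's relative phase by a small random amount; averaging over
    many such histories shrinks the off-diagonal terms while the diagonal
    (classical) probabilities are untouched."""
    phases = rng.uniform(-0.3, 0.3, size=(n_histories, n_kicks)).sum(axis=1)
    return abs(rho[0, 1] * np.exp(1j * phases).mean())

for n in (0, 10, 100, 1000):
    print(f"after {n:4d} environment interactions, "
          f"remaining coherence = {coherence_after(n):.4f}")
```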
But even if we manage to complete that project of uniting the quantum with the classical, we might end up none the wiser about what manner of stuff—what kind of reality—it all arises from. Perhaps one day another deeper theory will tell us. Or maybe the Copenhagen group was right a hundred years ago that we just have to accept a contingent, provisional reality: a world only half-formed until we decide how it will be…
Eminently worth reading in full: “When Reality Came Undone,” from @philipcball in @NautilusMag.
See also: When We Cease to Understand the World, by Benjamin Labatut.
* Niels Bohr
###
As we wrestle with reality, we might spare a thought for Ludwig Boltzmann; he died on this date in 1906. A physicist and philosopher, he is best remembered for the development of statistical mechanics, and the statistical explanation of the second law of thermodynamics (which connected entropy and probability).
Boltzmann helped pave the way for quantum theory both with his development of statistical mechanics (which is a pillar of modern physics) and with his 1877 suggestion that the energy levels of a physical system could be discrete.
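The connection between entropy and probability that he forged is captured in the relation now engraved on his tombstone– in modern notation,

\[
S = k_{\mathrm{B}} \ln W,
\]

where W counts the microscopic arrangements compatible with a given macroscopic state and k_B is Boltzmann’s constant.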