(Roughly) Daily

Posts Tagged ‘reasoning’

“The first principle is that you must not fool yourself – and you are the easiest person to fool”*…

[Image: petri dishes filled with reddish liquid, reflecting a scientist’s face]

We live in a time when a growing number of “authorities” in the U.S. and around the world are actively trading fact for convenient fiction. Science is under attack; there’s (all-too-grounded) concern that we may be headed into a new “Dark Age.”

C. Brandon Ogbunu pushes back, arguing that science– and more particularly, the emerging research field of metascience, a form of scientific self-examination– is essential for navigating our uncertain future…

On May 24, Vice President J.D. Vance authored a post on X that highlighted a “reproducibility crisis” in the sciences. Vance offered this amid a series of other critiques of higher education to justify the withholding of federal science funding from universities over the past several months. His post was timed to accompany a White House executive order that invoked the language of open science to introduce sweeping changes to our federal scientific infrastructure. It came just weeks after the release of plans to cut science funding in the 2026 fiscal year budget.

The playbook is standard: Fuse an aggressive political agenda to a more palatable set of criticisms. In this case, many agree that processes within professional science have, for decades, had significant flaws. In my view, politicians in power are using this as a justification to burn it down. And outside of a few higher-education legal efforts to fight back, the scientific community remains shell-shocked, unable to gather the momentum to resist effectively.

But in addition to resisting the changes, there might be other ways that we can navigate an uncertain future. In recent years, a field called “metascience” (often referred to as “the science of science”) has emerged, charged with understanding the processes of science, how it operates, and identifying themes in what is produced. I argue that this area is going to be essential moving forward in stormy times, as it can dispel the myth that science is an ideological leviathan incapable of self-reflection and can help us rebuild science into a craft that interrogates its fragilities.

As described in a 2018 review, the science of science “is based on a transdisciplinary approach that uses large data sets to study the mechanisms underlying the doing of science—from the choice of a research problem to career trajectories and progress within a field.” It asks questions about aspects of the scientific enterprise, including employment, publishing trends, economic incentives, merit, and other forces that influence science in ways that may escape our intuition…

[Ogbunu explains metascience, and explores examples of work-to-date and questions like: Who is doing science? What are their incentives (and how do they shape behavior)? How innovative is science? He reminds us that “metascientists” are following in the footsteps of humanists and social scientists (Bruno Latour, for example) who have examined science practice for many decades…]

… metascience offers a lens that is especially important at this critical moment. Support for science in the face of attacks is critical and necessary. But ironically, one of the best ways to defend the craft might be for scientists to identify the fragilities before the enemy does. We can use data and models, not solely our op-ed voices and social media timelines (though all can be useful). The field is already disabusing us of the notion that science as practiced is based on defensible incentives, neutrality of any kind, or merit, however defined.

Instead, it operates on what looks more like a runaway Matthew Effect, whereby the most established scientists benefit disproportionately from the system of reward — and thus the rich get richer. And the problem isn’t that the flaws exist, but that science’s practitioners aren’t interested in a critical lens towards them.
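
A toy illustration of that runaway dynamic (a minimal sketch of preferential attachment, my own and not from Ogbunu’s essay): start 100 scientists with equal reputation, then allocate each year’s rewards with probability proportional to the reputation already held…

```python
import random

# Minimal sketch of a Matthew Effect (illustrative only; not from the essay):
# 100 scientists begin with one unit of reputation. Each year, 100 rewards
# (grants, citations) go to recipients chosen with probability proportional
# to the reputation they already hold -- so the rich tend to get richer.

random.seed(42)  # fixed seed so the run is repeatable
reputation = [1.0] * 100

for year in range(30):
    for _ in range(100):
        # choose a recipient, weighted by accumulated reputation
        winner = random.choices(range(100), weights=reputation)[0]
        reputation[winner] += 1.0

reputation.sort(reverse=True)
top_share = sum(reputation[:10]) / sum(reputation)
print(f"Reputation share held by the top 10% of scientists: {top_share:.0%}")
```

Under strictly equal allocation the top decile would hold exactly 10%; in this model its share climbs far higher, with no difference in underlying merit anywhere in the setup.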

Metascience won’t fix our problems, but it formalizes ways that we can use to reflect, which may implore us to change science for the better…

Physicians (and other scientists) healing themselves: “Metascience Is More Important Now Than Ever,” from @cbo.bsky.social in @undark.org.

* Richard Feynman

###

As we commit to comprehension, we might send insightful birthday greetings to a forefather of metascience, Charles Sanders Peirce; he was born on this date in 1839. A scientist, mathematician, logician, and philosopher, he was (per philosopher Paul Weiss) “the most original and versatile of America’s philosophers and America’s greatest logician”. Bertrand Russell wrote “he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever.” He is considered by many to be “the father of pragmatism”; he helped formalize the field of statistics; and his contributions to logic were foundational, helping to establish semiotics (the study of signs).

For Peirce, logic encompassed much of what is now called epistemology and the philosophy of science. Peirce approached science as a practice, defining the concept of abductive reasoning to explain scientific advance, as well as rigorously formulating mathematical induction and deductive reasoning.

[Image: black-and-white portrait of Charles Sanders Peirce]

source

“Criticism may not be agreeable, but it is necessary. It fulfills the same function as pain in the human body. It calls attention to an unhealthy state of things.”*…

The estimable Henry Farrell on why, on average, we’re better at criticizing others than at thinking originally ourselves…

… our individual reasoning processes are biased in ways that are really hard for us (individually) to correct. We have a strong tendency to believe our own bullshit. The upside is that if we are far better at detecting bullshit in others than in ourselves, and if we have some minimal good faith commitment to making good criticisms, and entertaining good criticisms when we get them, we can harness our individual cognitive biases through appropriate group processes to produce socially beneficial ends. Our ability to see the motes in others’ eyes while ignoring the beams in our own can be put to good work, when we criticize others and force them to improve their arguments. There are strong benefits to collective institutions that underpin a cognitive division of labor.

This superficially looks to resemble the ‘overcoming bias’/‘less wrong’ approaches to self-improvement that are popular on the Internet. But it ends up going in a very different direction: collective processes of improvement rather than individual efforts to remedy the irremediable. The ideal of the individual seeking to eliminate all sources of bias so that he (it is, usually, a he) can calmly consider everything from a neutral and dispassionate perspective is replaced by a Humean recognition that reason cannot readily be separated from the desires of the reasoner. We need negative criticisms from others, since they lead us to understand weaknesses in our arguments that we are incapable of coming at ourselves, unless they are pointed out to us…

… It’s not about a radical individual virtuosity, but a radical individual humility. Your most truthful contributions to collective reasoning are unlikely to be your own individual arguments, but your useful criticisms of others’ rationales. Even more pungently, you are on average best able to contribute to collective understanding through your criticisms of those whose perspectives are most different to your own, and hence very likely those you most strongly disagree with. The very best thing that you may do in your life is create a speck of intense irritation for someone whose views you vigorously dispute, around which a pearl of new intelligence may then accrete…

… One of my favourite passages from anywhere is the closing of Middlemarch, where Eliot says of Dorothea:

“Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.”

Striving to be a Dorothea is a noble vocation, and likely the best we can hope for in any event; sooner or later, we will all be forgotten. In the long course of time, all of our arguments and ideas will be broken down and decomposed. At best we may hope, if we are very lucky, that they will contribute in some minute way to a rich humus, from which plants that we will never see or understand might spring.

Eminently worth reading in full: “In praise of negativity,” from @henryfarrell.

* Winston Churchill

###

As we contemplate the constructive, we might recall that it was on this date in 1871 that a discipline wholly dependent on incorporating corrective critique into its methods was founded: Cleveland Abbe became the founding chief scientist– effectively the head– of the newly formed U.S. Weather Service (later named the Weather Bureau; later still, the National Weather Service).

Abbe had started the first private weather reporting and warning service (in Cincinnati) and had been issuing weather reports or bulletins since 1869; he was the only person in the country at the time experienced in drawing weather maps from telegraphic reports and forecasting from them. The first U.S. meteorologist, he is known as the “father of the U.S. Weather Bureau,” where he systematized observation, trained personnel, and established scientific methods. He went on to become one of the 33 founders of the National Geographic Society.

source

“A prudent question is one-half of wisdom”*…

Sir Francis Bacon, portrait by Paul van Somer I, 1617

The death of Queen Elizabeth I created a career opportunity for philosopher and statesman Francis Bacon– one that, as Susan Wise Bauer explains– led him to found empiricism, to pioneer inductive reasoning, and in so doing, to advance the scientific method…

In 1603, Francis Bacon, London born, was forty-three years old: a trained lawyer and amateur philosopher, happily married, politically ambitious, perpetually in debt.

He had served Elizabeth I of England loyally at court, without a great deal of recognition in return. But now Elizabeth was dead at the age of sixty-nine, and her crown would go to her first cousin twice removed: James VI of Scotland, James I of England.

Francis Bacon hoped for better things from the new king, but at the moment he had no particular ‘in’ at the English court. Forced to be patient, he began working on a philosophical project he’d had in mind for some years–a study of human knowledge that he intended to call Of the Proficience and Advancement of Learning, Divine and Human.

Like most of Bacon’s undertakings, the project was ridiculously ambitious. He set out to classify all learning into the proper branches and lay out all of the possible impediments to understanding. Part I condemned what he called the three ‘distempers’ of learning, which included ‘vain imaginations,’ pursuits such as astrology and alchemy that had no basis in actual fact; Part II divided all knowledge into three branches and suggested that natural philosophy should occupy the prime spot. Science, the project of understanding the universe, was the most important pursuit man could undertake. The study of history (‘everything that has happened’) and poesy (imaginative writings) took definite second and third places.

For a time, Bacon didn’t expand on these ideas. The Advancement of Learning opened with a fulsome dedication to James I (‘I have been touched–yea, and possessed–with an extreme wonder at those your virtues and faculties . . . the largeness of your capacity, the faithfulness of your memory, the swiftness of your apprehension, the penetration of your judgment, and the facility and order of your elocution …. There hath not been since Christ’s time any king or temporal monarch which hath been so learned in all literature and erudition, divine and human’), and this groveling soon yielded fruit. In 1607 Bacon was appointed as solicitor general, a position he had coveted for years, and over the next decade or so he poured his energies into his government responsibilities.

He did not return to natural philosophy until after his appointment to the even higher post of chancellor in 1618. Now that he had battled his way to the top of the political dirt pile, he announced his intentions to write a work with even greater scope–a new, complete system of philosophy that would shape the minds of men and guide them into new truths. He called this masterwork the Great Instauration: the Great Establishment, a whole new way of thinking, laid out in six parts.

Part I, a survey of the existing ‘ancient arts’ of the mind, repeated the arguments of the Advancement of Learning. But Part II, published in 1620 as a stand-alone work, was something entirely different. It was a wholesale challenge to Aristotelian methods, a brand-new ‘doctrine of a more perfect use of reason.’

Aristotelian thinking relies heavily on deductive reasoning– for ancient logicians and philosophers, the highest and best road to the truth. Deductive reasoning moves from general statements (premises) to specific conclusions.

MAJOR PREMISE: All heavy matter falls toward the center of the universe.
MINOR PREMISE: The earth is made of heavy matter.
MINOR PREMISE: The earth is not falling.
CONCLUSION: The earth must already be at the center of the universe.

But Bacon had come to believe that deductive reasoning was a dead end that distorted evidence: ‘Having first determined the question according to his will,’ he objected, ‘man then resorts to experience, and bending her to conformity with his placets [expressions of assent], leads her about like a captive in a procession.’ Instead, he argued, the careful thinker must reason the other way around: starting from specifics and building toward general conclusions, beginning with particular pieces of evidence and working, inductively, toward broader assertions.

This new way of thinking–inductive reasoning–had three steps to it. The ‘true method,’ Bacon explained,

‘first lights the candle, and then by means of the candle shows the way; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms, and from established axioms again new experiments.’

In other words, the natural philosopher must first come up with an idea about how the world works: ‘lighting the candle.’ Second, he must test the idea against physical reality, against ‘experience duly ordered’–both observations of the world around him and carefully designed experiments. Only then, as a last step, should he ‘deduce axioms,’ coming up with a theory that could be claimed to carry truth. 

Hypothesis, experiment, conclusion: Bacon had just traced the outlines of the scientific method…
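
Bacon’s loop translates naturally into modern terms. Here is a minimal sketch (with invented measurements; Bauer’s text contains no code): light the candle with a hypothesis, test it against “experience duly ordered,” and only then deduce an axiom…

```python
# A minimal sketch of Bacon's three steps (invented data, illustrative only).

# Step 1 -- "light the candle": hypothesize that falling objects gain
# about 9.8 m/s of speed per second.
hypothesized_g = 9.8  # m/s^2

# Step 2 -- "experience duly ordered": (fall time in s, measured speed in m/s).
observations = [(1.0, 9.7), (1.5, 14.8), (2.0, 19.5), (2.5, 24.6)]

# Step 3 -- "deduce axioms": accept the general claim only if every
# observation agrees with it within measurement tolerance.
tolerance = 0.5  # m/s of allowed measurement error
consistent = all(
    abs(speed - hypothesized_g * t) <= tolerance for t, speed in observations
)

if consistent:
    print("Axiom provisionally accepted: v = g * t, with g ≈ 9.8 m/s²")
else:
    print("Hypothesis rejected; light a new candle.")
```

The loop restarts whenever experience contradicts the candle’s light– the corrective step Bacon accused purely deductive reasoning of skipping.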

Francis Bacon and the Scientific Method

An excerpt from The Story of Western Science by @SusanWiseBauer, via the invaluable @delanceyplace.

* Francis Bacon

###

As we embrace empiricism, we might send carefully-transmitted birthday greetings to Augusto Righi; he was born on this date in 1850. A physicist and a pioneer in the study of electromagnetism, he showed that radio waves displayed the characteristics of light-wave behavior (reflection, refraction, polarization, and interference), with which they shared the electromagnetic spectrum. In 1894 Righi was the first person to generate microwaves.

Righi influenced the young Guglielmo Marconi, the inventor of radio, who visited him at his lab. Indeed, Marconi used Righi’s four-ball spark oscillator (from Righi’s microwave work) in the first practical wireless telegraphy transmitters and receivers, which he built in 1894.

source

“It takes something more than intelligence to act intelligently”*…

AI isn’t human, but that doesn’t mean, Nathan Gardels argues (citing three recent essays in Noema, the magazine that he edits), that it cannot be intelligent…

As the authors point out, “the dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data.”

Critics of this approach argue that its “insurmountable wall” is “symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.).”
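
That carry rule can be written down as an explicit symbolic procedure– a minimal sketch (my own illustration, not from the essay) of the kind of step-by-step symbol manipulation the critics have in mind:

```python
# Multiply a multi-digit number by a single digit using only symbolic rules:
# work right to left, keep the ones digit in each column, carry the rest.

def multiply_by_digit(digits: str, d: int) -> str:
    carry = 0
    out = []
    for ch in reversed(digits):        # start from the furthest right column
        product = int(ch) * d + carry  # multiply the column, add the carry-in
        out.append(str(product % 10))  # the ones digit stays in this column
        carry = product // 10          # the extra value moves one column left
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(multiply_by_digit("347", 6))  # prints 2082, i.e., 347 * 6
```

Every step applies a fixed rule to symbols, with no appeal to what the numerals mean– exactly the sort of manipulation the critics doubt pattern-matching networks can perform reliably.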

Such reasoning would enable logical inferences that can apply what has been learned to unprogrammed contingencies, thus “completing patterns” by connecting the dots. LeCun and Browning (the authors quoted above) argue that, as with the evolution of the human mind itself, in time and with manifold experiences, this ability may emerge as well from the neural networks of intelligent machines.

“Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach,” they contend. “They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.”

The philosopher Charles Taylor associates the breakthroughs of consciousness in an earlier era with the arrival of written language. In his view, access to the stored memories of this first cloud technology enabled the interiority of sustained reflection from which symbolic competencies evolved.

This “transcendence” beyond oral narrative myth narrowly grounded in one’s own immediate circumstance and experience gave rise to what the sociologist Robert Bellah called “theoretic culture” — a mental organization of the world at large into the abstraction of symbols. The universalization of abstraction, in turn and over a long period of time, enabled the emergence of systems of thought ranging from monotheistic religions to the scientific reasoning of the Enlightenment.

Not unlike the transition from oral to written culture, might AI be the midwife to the next step of evolution? As has been written in this column before, we have only become aware of climate change through planetary computation that abstractly models the Earthly organism beyond what any of us could conceive out of our own un-encompassing knowledge or direct experience.

For Bratton and Agüera y Arcas, it comes down in the end to language as the “cognitive infrastructure” that can comprehend patterns, referential context and the relationality among them when facing novel events.

“There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real-world referents,” they observe.

As an “executable” translation of human language, code does not produce the same kind of intelligence that emerges from human consciousness, but is intelligence nonetheless. What is most likely to emerge in their view is not “artificial” intelligence when machines become more human, but “synthetic” intelligence, which fuses both.

As AI further develops through human prompting or a capacity to guide its own evolution by acquiring a sense of itself in the world, what is clear is that it is well on the way to taking its place alongside, perhaps conjoining and becoming synthesized with, other intelligences, from Homo sapiens to insects to forests to the planetary organism itself…

AI takes its place among and may conjoin with other multiple intelligences: “Cognizant Machines: A What Is Not A Who.” Eminently worth reading in full– both the linked essay and the articles referenced in it.

* Dostoyevsky, Crime and Punishment

###

As we make room for company, we might recall that it was on this date in 1911 that a telegraph operator on the 7th floor of The New York Times headquarters in Times Square sent a message– “This message sent around the world”– that left at 7:00 p.m., traveled over 28,000 miles, and was relayed by 16 different operators. It arrived back at the Times only 16.5 minutes later.

The “around the world telegraphy” record had been set in 1903, when President Roosevelt celebrated the completion of the Commercial Pacific Cable by sending the first round-the-world message in just 9 minutes. But that message had been given priority status; the Times wanted to see how long a regular message would take — and what route it would follow.

The building from which the message originated is now called One Times Square and is best known as the site of the New Year’s Eve ball drop.

source


“No law of nature, however general, has been established all at once; its recognition has always been preceded by many presentiments.”*…

Laws of nature are impossible to break, and nearly as difficult to define. Just what kind of necessity do they possess?

… The natural laws limit what can happen. They are stronger than the laws of any country because it is impossible to violate them. If it is a law of nature that, for example, no object can be accelerated from rest to beyond the speed of light, then it is not merely that such accelerations never occur. They cannot occur.

There are many things that never actually happen but could have happened in that their occurrence would violate no law of nature. For instance, to borrow an example from the philosopher Hans Reichenbach (1891-1953), perhaps in the entire history of the Universe there never was nor ever will be a gold cube larger than one mile on each side. Such a large gold cube is not impossible. It just turns out never to exist. It’s like a sequence of moves that is permitted by the rules of chess but never takes place in the entire history of chess-playing. By contrast, if it is a law of nature that energy is never created or destroyed, then it is impossible for the total energy in the Universe to change. The laws of nature govern the world like the rules of chess determine what is permitted and what is forbidden during a game of chess, in an analogy drawn by the biologist T H Huxley (1825-95).

Laws of nature differ from one another in many respects. Some laws concern the general structure of spacetime, while others concern some specific inhabitant of spacetime (such as the law that gold doesn’t rust). Some laws relate causes to their effects (as Coulomb’s law relates electric charges to the electric forces they cause). But other laws (such as the law of energy conservation or the spacetime symmetry principles) do not specify the effects of any particular sort of cause. Some laws involve probabilities (such as the law specifying the half-life of some radioactive isotope). And some laws are currently undiscovered – though I can’t give you an example of one of those! (By ‘laws of nature’, I will mean the genuine laws of nature that science aims to discover, not whatever scientists currently believe to be laws of nature.)

What all of the various laws have in common, despite their diversity, is that it is necessary that everything obey them. It is impossible for them to be broken. An object must obey the laws of nature…

But although all these truisms about the laws of nature sound plausible and familiar, they are also imprecise and metaphorical. The natural laws obviously do not ‘govern’ the Universe in the way that the rules of chess govern a game of chess. Chess players know the rules and so deliberately conform to them, whereas inanimate objects do not know the laws of nature and have no intentions.

Scientists discover laws of nature by acquiring evidence that some apparent regularity is not only never violated but also could never have been violated. For instance, when every ingenious effort to create a perpetual-motion machine turned out to fail, scientists concluded that such a machine was impossible – that energy conservation is a natural law, a rule of nature’s game rather than an accident. In drawing this conclusion, scientists adopted various counterfactual conditionals, such as that, even if they had tried a different scheme, they would have failed to create a perpetual-motion machine. That it is impossible to create such a machine (because energy conservation is a law of nature) explains why scientists failed every time they tried to create one.

Laws of nature are important scientific discoveries. Their counterfactual resilience enables them to tell us about what would have happened under a wide range of hypothetical circumstances. Their necessity means that they impose limits on what is possible. Laws of nature can explain why something failed to happen by revealing that it cannot happen – that it is impossible.

We began with several vague ideas that seem implicit in scientific reasoning: that the laws of nature are important to discover, that they help us to explain why things happen, and that they are impossible to break. Now we can look back and see that we have made these vague ideas more precise and rigorous. In doing so, we found that these ideas are not only vindicated, but also deeply interconnected. We now understand better what laws of nature are and why they are able to play the roles that science calls upon them to play.

“What is a Law of Nature?” Marc Lange explains in @aeonmag.

* Dmitri Mendeleev (creator of the Periodic Table)

###

As we study law, we might send inquisitive birthday greetings to Federico Cesi; he was born on this date in 1585. A scientist and naturalist, he is best remembered as the founder of the Accademia dei Lincei (Lincean Academy), often cited as the first modern scientific society. Cesi coined (or at least was first to publish/disseminate) the word “telescope” to denote the instrument used by Galileo– who was the sixth member of the Lincean Academy.

source