(Roughly) Daily

Posts Tagged ‘medicine’

“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.”*…

… Indeed, the same might be said of life itself. David Krakauer and Chris Kempes of the Santa Fe Institute suggest that life is starting to look a lot less like an outcome of chemistry and physics, and more like a computational process…

… Today, doubts about conventional explanations of life are growing and a wave of new general theories has emerged to better define our origins. These suggest that life doesn’t only depend on amino acids, DNA, proteins and other forms of matter. Today, it can be digitally simulated, biologically synthesised or made from entirely different materials to those that allowed our evolutionary ancestors to flourish. These and other possibilities are inviting researchers to ask more fundamental questions: if the materials for life can radically change – like the materials for computation – what stays the same? Are there deeper laws or principles that make life possible?

Our planet appears to be exceptionally rare. Of the thousands that have been identified by astronomers, only one has shown any evidence of life. Earth is, in the words of Carl Sagan, a ‘lonely speck in the great enveloping cosmic dark.’ This apparent loneliness is an ongoing puzzle faced by scientists studying the origin and evolution of life: how is it possible that only one planet has shown incontrovertible evidence of life, even though the laws of physics are shared by all known planets, and the elements in the periodic table can be found across the Universe?

The answer, for many, is to accept that Earth really is as unique as it appears: the absence of life elsewhere in the Universe can be explained by accepting that our planet is physically and chemically unlike the many other planets we have formally identified. Only Earth, so the argument goes, produced the special material conditions conducive to our rare chemistry, and it did so around 4 billion years ago, when life first emerged.

In 1952, Stanley Miller and his supervisor Harold Urey provided the first experimental evidence for this idea through a series of experiments at the University of Chicago. The Miller-Urey experiment, as it became known, sought to recreate the atmospheric conditions of early Earth through laboratory equipment, and to test whether organic compounds (amino acids) could be created in a reconstructed inorganic environment. When their experiment succeeded, the emergence of life became bound to the specific material conditions and chemistry on our planet, billions of years ago.

However, more recent research suggests there are likely countless other possibilities for how life might emerge through potential chemical combinations. As the British chemist Lee Cronin, the American theoretical physicist Sara Walker and others have recently argued, seeking near-miraculous coincidences of chemistry can narrow our ability to find other processes meaningful to life. In fact, most chemical reactions, whether they take place on Earth or elsewhere in the Universe, are not connected to life. Chemistry alone is not enough to identify whether something is alive, which is why researchers seeking the origin of life must use other methods to make accurate judgments.

Today, ‘adaptive function’ is the primary criterion for identifying the right kinds of biotic chemistry that give rise to life, as the theoretical biologist Michael Lachmann (our colleague at the Santa Fe Institute) likes to point out. In the sciences, adaptive function refers to an organism’s capacity to biologically change, evolve or, put another way, solve problems. ‘Problem-solving’ may seem more closely related to the domains of society, culture and technology than to the domain of biology. We might think of the problem of migrating to new islands, which was solved when humans learned to navigate ocean currents, or the problem of plotting trajectories, which our species solved by learning to calculate angles, or even the problem of shelter, which we solved by building homes. But genetic evolution also involves problem-solving. Insect wings solve the ‘problem’ of flight. Optical lenses that focus light solve the ‘problem’ of vision. And the kidneys solve the ‘problem’ of filtering blood. This kind of biological problem-solving – an outcome of natural selection and genetic drift – is conventionally called ‘adaptation’. Though it is crucial to the evolution of life, new research suggests it may also be crucial to the origins of life.

This problem-solving perspective is radically altering our knowledge of the Universe…

The idea of life as a kind of computational process has roots that go back to the 4th century BCE, when Aristotle introduced his philosophy of hylomorphism, in which functions take precedence over forms. For Aristotle, abilities such as vision were less about the biological shape and matter of eyes and more about the function of sight. It took around 2,000 years for his idea of hylomorphic functions to evolve into the idea of adaptive traits through the work of Charles Darwin and others. In the 19th century, these naturalists stopped defining organisms by their material components and chemistry, and instead began defining traits by focusing on how organisms adapted and evolved – in other words, how they processed and solved problems. It would then take a further century for the idea of hylomorphic functions to shift into the abstract concept of computation through the work of Alan Turing [and here] and the earlier ideas of Charles Babbage [here].

In the 1930s, Turing became the first to connect the classical Greek idea of function to the modern idea of computation, but his ideas were impossible without the work of Babbage, a century before. Important for Turing was the way Babbage had marked the difference between calculating devices that follow fixed laws of operation, which Babbage called ‘Difference Engines’, and computing devices that follow programmable laws of operation, which he called ‘Analytical Engines.’
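Babbage’s ‘fixed laws of operation’ can be made concrete. His Difference Engine mechanized the method of finite differences, by which any polynomial can be tabulated using additions alone – no multiplication, and no way to change the rule mid-run. A minimal Python sketch of that idea (the function name and interface are ours, for illustration):

```python
# The method of finite differences, which Babbage's Difference Engine
# mechanized: once the initial differences of a polynomial are known,
# every subsequent value follows from additions alone -- a fixed,
# unprogrammable law of operation.

def difference_engine(initial_differences, steps):
    """Tabulate a polynomial from its initial finite differences.

    For p(x) = x^2, the values 0, 1, 4, 9, ... have first differences
    1, 3, 5, ... and a constant second difference of 2, so
    initial_differences = [0, 1, 2].
    """
    diffs = list(initial_differences)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # Each column is updated by adding in the column to its right.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

print(difference_engine([0, 1, 2], 6))  # squares: [0, 1, 4, 9, 16, 25]
```

An Analytical Engine, by contrast, could change which rule it applies as it runs – the distinction Turing abstracted into the universal machine.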

Using Babbage’s distinction, Turing developed the most general model of computation: the universal Turing Machine…

Turing did not describe any of the materials out of which such a machine would be built. He had little interest in chemistry beyond the physical requirement that a computer store, read and write bits reliably. That is why, amazingly, this simple (albeit infinite) programmable machine is an abstract model of how our powerful modern computers work. But the theory of computation Turing developed can also be understood as a theory of life. Both computation and life involve a minimal set of algorithms that support adaptive function. These ‘algorithms’ help materials process information, from the rare chemicals that build cells to the silicon semiconductors of modern computers. And so, as some research suggests, a search for life and a search for computation may not be so different. In both cases, we can be side-tracked if we focus on materials: on chemistry, physical environments and conditions.

In response to these concerns, a set of diverse ideas has emerged to explain life anew, through principles and processes shared with computation, rather than the rare chemistry and early Earth environments simulated in the Miller-Urey experiment. What drives these ideas, developed over the past 60 years by researchers working in disparate disciplines – including physics, computer science, astrobiology, synthetic biology, evolutionary science, neuroscience and philosophy – is a search for the fundamental principles that drive problem-solving matter. Though these researchers have been working in disconnected fields, we believe there are broad patterns to their research on the origins of life. It can be difficult for outsiders to see how their seemingly incommensurable ideas connect to one another, or why they are significant. This is why we have set out to review and organise these new ways of thinking.

Their proposals can be grouped into three distinct categories, three hypotheses, which we have named Tron, Golem and Maupertuis…

[The authors unpack all three proposals…]

… Is life problem-solving matter? When thinking about our biotic origins, it is important to remember that most chemical reactions are not connected to life, whether they take place here or elsewhere in the Universe. Chemistry alone is not enough to identify life. Instead, researchers use adaptive function – a capacity for solving problems – as the primary evidence and filter for identifying the right kinds of biotic chemistry. If life is problem-solving matter, our origins were not a miraculous or rare event governed by chemical constraints but, instead, the outcome of far more universal principles of information and computation. And if life is understood through these principles, then perhaps it has come into existence more often than we previously thought, driven by problems as big as the bang that started our abiotic universe moving 13.8 billion years ago.

The physical account of the origin and evolution of the Universe is a purely mechanical affair, explained through events such as the Big Bang, the formation of light elements, the condensation of stars and galaxies, and the formation of heavy elements. This account doesn’t involve objectives, purposes, or problems. But the physics and chemistry that gave rise to life appear to have been doing more than simply obeying the fundamental laws. At some point in the Universe’s history, matter became purposeful. It became organised in a way that allowed it to adapt to its immediate environment. It evolved from a Babbage-like Difference Engine into a Turing-like Analytical Engine. This is the threshold for the origin of life.

In the abiotic universe, physical laws, such as the law of gravitation, are like ‘calculations’ that can be performed everywhere in space and time through the same basic input-output operations. For living organisms, however, the rules of life can be modified or ‘programmed’ to solve unique biological problems – these organisms can adapt themselves and their environments. That’s why, if the abiotic universe is a Difference Engine, life is an Analytical Engine. This shift from one to the other marks the moment when matter became defined by computation and problem-solving. Certainly, specialised chemistry was required for this transition, but the fundamental revolution was not in matter but in logic.

In that moment, there emerged for the first time in the history of the Universe a big problem to give the Big Bang a run for its money. To discover this big problem – to understand how matter has been able to adapt to a seemingly endless range of environments – many new theories and abstractions for measuring, discovering, defining and synthesising life have emerged in the past century. Some researchers have synthesised life in silico. Others have experimented with new forms of matter. And others have discovered new laws that may make life as inescapable as physics…

Eminently worth reading in full: “Problem-solving matter,” from @sfiscience and @aeonmag.

Pair with “At the limits of thought” (also by Krakauer).

* Albert Einstein

###

As we obsess on ontology, we might spare a thought for someone concerned with life as it is lived: Sigismund Schlomo “Sigmund” Freud; he died on this date in 1939. A neurologist, he was the founder of psychoanalysis– a clinical method for evaluating and treating pathologies seen as originating from conflicts in the psyche, through dialogue between patient and psychoanalyst, and the distinctive theory of mind and human agency derived from it.

source

Written by (Roughly) Daily

September 23, 2024 at 1:00 am

“The worst part about having a mental illness is people expect you to behave as if you don’t”*…

Trends across all causes and risks of disease and disability show substantial declines in infectious diseases, malnutrition, cardiovascular diseases, and several cancers. But even as we make strides in addressing physical health, mental health challenges are on the rise: disability-adjusted life years (DALYs) lost to mental health disorders and alcohol use have increased sharply over the last few decades, especially among people aged 25 to 74.

The WHO found that the two most common mental disorders, anxiety and depression, cost global GDP $1 trillion in 2010. Lost output for the same period attributed to mental, neurological, and substance abuse disorders – which often intersect – was estimated at between $2.5 and $8.5 trillion. This is expected to double by 2030.

A report from the Aspen Institute and Dalberg explores the global rise of mental illness through economics, lived experiences, and expert insights…

According to the World Health Organization (WHO), 450 million people suffer from some form of mental illness over the course of their lives. So, it’s no surprise that many of us have experienced, or know someone who has experienced, severe struggles with mental health. This is a full-blown crisis exacerbated by a lack of infrastructure, lack of funding, and a lack of health equity. This is despite the fact that mental health issues are the leading cause of disability globally. Also, according to the WHO, mental health conditions are the primary cause of suicide. And suicide is the second leading cause of death for people ages 15 to 29. This is a crisis of our time.

In this report, we offer a snapshot into both the magnitude and the scope of the mental health crisis facing humanity. In addition to briefly framing the issues, we share summaries of dozens of interviews we held with “expert practitioners” working in both the public and private sectors and with individuals whose “lived experience” has been touched by mental health struggles.

In the course of our work, we looked for recurring themes that could promote a dialogue about seeking sustainable, scalable solutions to the crisis. Among those themes are the challenges of building an infrastructure for access to quality mental healthcare, the continued lack of parity between the provision of services for mental health versus physical health, and the pervasiveness of stigma associated with diseases of the mind.

Further, although most of us do not think of mental health as related to investing, and if we do, we might find the notion distasteful, there are indeed a growing number of developing technologies and treatment modalities that hold promise for expanding access to mental health services and offering innovative practices. We highlight a handful of examples. The individuals who generously shared their personal struggles also shared the resources and practices that they found most helpful.

We acknowledge the global nature of the crisis and the role that both the pandemic and other contextual factors have played in substantial increases in anxiety disorders and other mental health issues. Further, we are seeing increases in specific demographics, such as poorer mental health among women, with one in five women experiencing a common mental disorder (such as anxiety or depression), compared with one in eight men. No demographic is immune.

Given the crisis at hand, it is our hope that offering greater transparency to the world of mental health will stimulate a search for solutions…

Bracing– but important– reading: “A Crisis of Our Time.”

(Image above from a series of photos illustrating mental illness, from Christian Sampson.)

* from the notebook of Arthur Fleck (AKA The Joker), via Todd Phillips’ 2019 film Joker

###

As we care about care, we might recall that it was on this date in 2019 that the first presentation print of Todd Phillips’ film Joker was shipped to Italy, where it premiered at the Venice International Film Festival and won the Golden Lion, the festival’s top prize. The film went on to box office success and set records for an October release, grossing over $1 billion – the first R-rated film to do so. It received numerous accolades, including two wins at the 92nd Academy Awards – Best Actor (Joaquin Phoenix) and Best Original Score (Hildur Guðnadóttir) – out of 11 nominations, including Best Picture, a first for a DC film.

source

“Advances are made by answering questions. Discoveries are made by questioning answers.”*…

How does a one-dimensional string of molecules fold correctly into its innate three-dimensional shape? This question, known as the protein folding problem, was recently solved by artificial intelligence.

Three years ago, Google’s AlphaFold pulled off the biggest artificial intelligence breakthrough in science to date [see here]. Yasemin Saplakoglu explains how this has accelerated molecular research and kindled deep questions about why we do science…

In December 2020, when pandemic lockdowns made in-person meetings impossible, hundreds of computational scientists gathered in front of their screens to watch a new era of science unfold.

They were assembled for a conference, a friendly competition some of them had attended in person for almost three decades where they could all get together and obsess over the same question. Known as the protein folding problem, it was simple to state: Could they accurately predict the three-dimensional shape of a protein molecule from the barest of information — its one-dimensional molecular code? Proteins keep our cells and bodies alive and running. Because the shape of a protein determines its behavior, successfully solving this problem would have profound implications for our understanding of diseases, production of new medicines and insight into how life works.

At the conference, held every other year, the scientists put their latest protein-folding tools to the test. But a solution always loomed beyond reach. Some of them had spent their entire careers trying to get just incrementally better at such predictions. These competitions were marked by baby steps, and the researchers had little reason to think that 2020 would be any different.

They were wrong about that.

That week, a relative newcomer to the protein science community named John Jumper had presented a new artificial intelligence tool, AlphaFold2, which had emerged from the offices of Google DeepMind, the tech company’s artificial intelligence arm in London. Over Zoom, he presented data showing that AlphaFold2’s predictive models of 3D protein structures were over 90% accurate — five times better than those of its closest competitor.

In an instant, the protein folding problem had gone from impossible to painless. The success of artificial intelligence where the human mind had floundered rocked the community of biologists. “I was in shock,” said Mohammed AlQuraishi, a systems biologist at Columbia University’s Program for Mathematical Genomics, who attended the meeting. “A lot of people were in denial.”

But in the conference’s concluding remarks, its organizer John Moult left little room for doubt: AlphaFold2 had “largely solved” the protein folding problem — and shifted protein science forever. Sitting in front of a bookshelf in his home office in a black turtleneck, clicking through his slides on Zoom, Moult spoke in tones that were excited but also ominous. “This is not an end but a beginning,” he said…

[Saplakoglu tells the story of AlphaFold and of subsequent developments…]

… Seventy years ago, proteins were thought to be a gelatinous substance, Porter said. “Now look at what we can see”: structure after structure of a vast world of proteins, whether they exist in nature or were designed.

The field of protein biology is “more exciting right now than it was before AlphaFold,” Perrakis said. The excitement comes from the promise of reviving structure-based drug discovery, the acceleration in creating hypotheses and the hope of understanding complex interactions happening within cells.

“It [feels] like the genomics revolution,” AlQuraishi said. There is so much data, and biologists, whether in their wet labs or in front of their computers, are just starting to figure out what to do with it all.

But like other artificial intelligence breakthroughs sparking across the world, this one might have a ceiling.

AlphaFold2’s success was founded on the availability of training data — hundreds of thousands of protein structures meticulously determined by the hands of patient experimentalists. While AlphaFold3 and related algorithms have shown some success in determining the structures of molecular compounds, their accuracy lags behind that of their single-protein predecessors. That’s in part because there is significantly less training data available.

The protein folding problem was “almost a perfect example for an AI solution,” Thornton said, because the algorithm could train on hundreds of thousands of protein structures collected in a uniform way. However, the Protein Data Bank may be an unusual example of organized data sharing in biology. Without high-quality data to train algorithms, they won’t make accurate predictions.

“We got lucky,” Jumper said. “We met the problem at the time it was ready to be solved.”

No one knows if deep learning’s success at addressing the protein folding problem will carry over to other fields of science, or even other areas of biology. But some, like AlQuraishi, are optimistic. “Protein folding is really just the tip of the iceberg,” he said. Chemists, for example, need to perform computationally expensive calculations. With deep learning, these calculations are already being computed up to a million times faster than before, AlQuraishi said.

Artificial intelligence can clearly advance specific kinds of scientific questions. But it may get scientists only so far in advancing knowledge. “Historically, science has been about understanding nature,” AlQuraishi said — the processes that underlie life and the universe. If science moves forward with deep learning tools that reveal solutions and no process, is it really science?

“If you can cure cancer, do you care about how it really works?” AlQuraishi said. “It is a question that we’re going to wrestle with for years to come.”

If many researchers decide to give up on understanding nature’s processes, then artificial intelligence will not just have changed science — it will have changed the scientists too.

Meanwhile, the organizers of CASP (the Critical Assessment of Structure Prediction, the competition and conference described above) are wrestling with a different question: how to continue. AlphaFold2 is a product of CASP, and it solved the main problem the conference was organized to address. “It was a big shock for us in terms of: Just what is CASP anymore?” Moult said.

In 2022, the CASP meeting was held in Antalya, Turkey. Google DeepMind didn’t enter, but the team’s presence was felt. “It was more or less just people using AlphaFold,” Jones said. In that sense, he said, Google won anyway.

Some researchers are now less keen on attending. “Once I saw that result, I switched my research,” Xu said. Others continue to hone their algorithms. Jones still dabbles in structure prediction, but it’s more of a hobby for him now. Others, like AlQuraishi and Baker, continue on by developing new algorithms for structure prediction and design, undaunted by the prospect of competing against a multibillion-dollar company.

Moult and the conference organizers are trying to evolve. The next round of CASP opened for entries in May. He is hoping that deep learning will conquer more areas of structural biology, like RNA or biomolecular complexes. “This method worked on this one problem,” Moult said. “There are lots of other related problems in structural biology.”

The next meeting will be held in December 2024 by the aqua waters of the Caribbean Sea. The winds are cordial, as the conversation will probably be. The stamping has long since died down — at least out loud. What this year’s competition will look like is anyone’s guess. But if the past few CASPs are any indication, Moult knows to expect only one thing: “surprises.”…

When one door closes, another opens: “How AI Revolutionized Protein Science, but Didn’t End It,” from @yasemin_sap in @QuantaMagazine.

See also: “How Colorful Ribbon Diagrams Became the Face of Proteins” from the same author.

* Bernard Haisch

###

As we ponder progress, we might spare a thought for Edmond H. Fischer; he died on this date in 2021. A biochemist, he and his collaborator Edwin G. Krebs were awarded the Nobel Prize in Physiology or Medicine in 1992 for describing how reversible phosphorylation works as a switch to activate proteins and regulate a number of cellular processes. Their discovery was a key to unlocking how glycogen in the body breaks down into glucose. It fostered techniques that prevent the body from rejecting transplanted organs and opened new doors for research into cancer, blood pressure, inflammatory reactions, and brain signals.

source

Written by (Roughly) Daily

August 27, 2024 at 1:00 am

“Most of the knowledge and much of the genius of the research worker lie behind his selection of what is worth observing”*…

Schizosaccharomyces pombe yeast cells divide in a petri dish

As Molly Herring reports, there’s trouble in labs around the U.S. Scientists are struggling to figure out why—and how—the standard growth medium is disrupting their studies. For now, it’s simply a problem, but as Herring suggests, it could lead to an exciting new discovery…

Reine Protacio couldn’t figure out why all her cells kept dying. A molecular biologist at the University of Arkansas for Medical Sciences College of Medicine, she kept trying to grow colonies of fission yeast (Schizosaccharomyces pombe) on petri dishes plated with nutrients. The lab uses the microbes to study what happens to DNA during cell division, but even in the control experiments, none of the yeast survived. Protacio and her colleagues investigated several possible suspects—from dirty glassware to contaminated water—before landing on a surprising culprit: bad agar.

Derived from seaweed, agar is a gelatinlike ingredient used to grow yeast on a solid surface. It’s like flour in cake batter, Protacio says. “You’d never expect the flour—it’s the most basic thing.” And yet here was agar, foiling day after day of experiments. As it turned out, Protacio’s lab wasn’t alone.

When Protacio first identified the bad agar last summer, one of the heads of her lab, molecular biologist Wayne Wahls, posted about the find on a community email group called PombeList. Labs on entirely different continents responded that they faced what seemed like the same problem, even though their agar had come from different companies and lots, sometimes years apart.

Nick Rhind, a cell biologist at the University of Massachusetts Chan Medical School, reported his lab had received a toxic batch of agar as far back as 2006. He had sourced his supply from the same company that sold bad agar to Protacio’s lab: Sunrise Science Products.

The problem probably didn’t arise there, Rhind says. Sunrise and other lab supply companies don’t manufacture the agar themselves; they buy it from other firms that make it from two polysaccharides—agarose and agaropectin—found in the cell walls of red algae, a kind of seaweed. “My understanding was that there were very few suppliers,” Rhind says. “Everyone pretty much bought it from the same bulk supplier, packaged it, and sent it out.”…

[Herring unpacks the efforts to figure out what’s going wrong…]

… Getting to the bottom of the issue might be more trouble than it’s worth, however. Purifying and identifying an active compound is a long and complex process of elimination, Rhind says. For the time being, he says, labs and suppliers should take extra steps to avoid contamination wherever possible. This could include more thorough quality control tests using many different formulas and microbes. “I don’t think anyone is that interested in why the [yeast] died,” he says. “They just want to make sure it doesn’t happen again.”…

But that could be an opportunity lost…

[One] possibility, Rhind says, is that the red algae, other algae growing on it, or even bacteria eating the algae produce an antifungal compound, which would kill yeast. If so, a nuisance for microbiologists could be a boon for drug developers. “There are actually not that many good antifungals in the world,” he says. “It would be a serendipitous discovery, but it’s a long shot.”…

Dissipating dark clouds– and searching for silver linings: “Bad agar is killing lab yeast around the world. Where is it coming from?” by @mollymherring in @ScienceMagazine.

* Alan Gregg

###

As we muse on media, we might spare a thought for Virginia Apgar; she died on this date in 1974. A physician and medical researcher, she is best remembered as the creator of what’s now known as the 10-point Apgar Score, a way to quickly assess the health of a newborn child immediately after birth in order to combat infant mortality. Given at one minute and five minutes after birth, the Apgar test measures a child’s breathing, skin color, reflexes, muscle tone, and heart rate. As a colleague observed, “she probably did more than any other physician to bring the problem of birth defects out of back rooms.”
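The scoring arithmetic itself is simple enough to sketch: five signs, each rated 0 to 2, summed to a score out of 10. A minimal Python illustration (the sign names follow the standard mnemonic; this is illustrative only, not a clinical tool):

```python
# A minimal sketch of Apgar scoring: five signs, each rated 0-2,
# summed into a 10-point score. Illustrative only -- not for
# clinical use.

APGAR_SIGNS = ("appearance", "pulse", "grimace", "activity", "respiration")

def apgar_score(ratings):
    """Sum five 0-2 ratings into the 10-point Apgar score."""
    if set(ratings) != set(APGAR_SIGNS):
        raise ValueError("ratings must cover exactly the five Apgar signs")
    if any(not 0 <= r <= 2 for r in ratings.values()):
        raise ValueError("each sign is rated 0, 1, or 2")
    return sum(ratings.values())

# A newborn scoring 2 on every sign at the five-minute check:
print(apgar_score({sign: 2 for sign in APGAR_SIGNS}))  # 10
```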

source

Written by (Roughly) Daily

August 7, 2024 at 1:00 am

“Old ways of thinking die hard, particularly when they were weaned by legally enforced monopolies”*…

According to the US Bureau of Labor Statistics, from 2000 to the present, prices in the hospital industry have grown faster than prices in any other sector of the US economy. The $1.3 trillion US hospital sector accounts for 6% of US GDP, nearly a third of all health care spending (which is materially higher as a share of GDP in the U.S. than in any other country). The average price for an inpatient hospital stay is $25,000.

A new working paper from the NBER assesses the impact of these rising costs. From its abstract:

We analyze the economic consequences of rising health care prices in the US. Using exposure to price increases caused by horizontal hospital mergers as an instrument, we show that rising prices raise the cost of labor by increasing employer-sponsored health insurance premiums. A 1% increase in health care prices lowers both payroll and employment at firms outside the health sector by approximately 0.4%. At the county level, a 1% increase in health care prices reduces per capita labor income by 0.27%, increases flows into unemployment by approximately 0.1 percentage points (1%), lowers federal income tax receipts by 0.4%, and increases unemployment insurance payments by 2.5%. The increases in unemployment we observe are concentrated among workers earning between $20,000 and $100,000 annually. Finally, we estimate that a 1% increase in health care prices leads to a 1 per 100,000 population (2.7%) increase in deaths from suicides and overdoses. This implies that approximately 1 in 140 of the individuals who become fully separated from the labor market after health care prices increase die from a suicide or drug overdose.

NBER Working Paper Series: “Who Pays for Rising Health Care Prices? Evidence from Hospital Mergers”
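Taken at face value, the abstract’s per-1% estimates can be scaled to a hypothetical price rise. A back-of-envelope Python sketch, assuming the effects scale roughly linearly (a simplifying assumption of ours, not the paper’s method; the 5% figure echoes the price increases the authors attribute to predictably anti-competitive mergers):

```python
# Back-of-envelope scaling of the NBER paper's per-1% estimates.
# Linear scaling is a rough approximation for illustration; the
# paper's own estimation is more careful than this.

EFFECTS_PER_1PCT = {
    "payroll_and_employment": -0.4,          # % change, non-health firms
    "per_capita_labor_income": -0.27,        # % change, county level
    "federal_income_tax_receipts": -0.4,     # % change
    "unemployment_insurance_payments": 2.5,  # % change
}

def scaled_effects(price_increase_pct, effects_per_1pct=EFFECTS_PER_1PCT):
    """Scale each per-1% estimate linearly to a given price increase."""
    return {name: round(per_pct * price_increase_pct, 2)
            for name, per_pct in effects_per_1pct.items()}

# A hypothetical 5% merger-driven price increase:
for name, pct in scaled_effects(5.0).items():
    print(f"{name}: {pct:+.2f}%")
```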

Four of the authors of that paper looked more deeply into the issue, exploring why those costs are rising; they identified consolidation in the hospital sector– 90% of hospital markets are now highly concentrated, according to the thresholds set by the FTC and the U.S. Department of Justice– as a key culprit:

The study, conducted in collaboration with researchers at Harvard University, Yale University, and the University of Wisconsin-Madison, found that of 1,164 mergers among the nation’s approximately 5,000 acute-care hospitals that occurred in the United States from 2000 to 2020, the Federal Trade Commission (FTC), which is tasked with preserving competition, challenged only 13 of them — an enforcement rate of about 1%.

Meanwhile, the researchers show that the FTC, using standard screening tools available to the agency during that period, could have flagged 20% of the mergers — 238 transactions — as likely to cause reduced competition and increase prices…

Unchallenged hospital mergers should have had minimal effects on competition and prices if the FTC were optimally targeting enforcement, the researchers noted. However, using data on the prices that hospitals negotiate with private insurers, the researchers found that mergers the FTC could have challenged as predictably anti-competitive between 2010 and 2015 eventually led to price increases of 5% or more.

The researchers estimate that the 53 hospital mergers that occurred on average annually from 2010 to 2015 raised health spending on the privately insured by $204 million in the following year alone. Putting this spending increase in context, the researchers note that the FTC’s average annual budget and antitrust enforcement budget between 2010 and 2015 were $315 and $136 million, respectively…

The study found that mergers in rural regions and areas with lower incomes and higher rates of poverty generated larger average price increases, often in outpatient services. The researchers suggest this occurred because those regions — compared with higher income, urban settings—have fewer free-standing clinics that offer surgical and imaging services that compete against hospitals in the outpatient market…

Consolidation in Hospital Sector Leading to Higher Health Care Costs

As Cory Doctorow succinctly observes…

The health system is a perfect example of how monopolization drives more monopolization, and how that comes to harm the public and workers. Health consolidation began with pharma mergers, that led to pharma companies gouging hospitals. Hospitals, in turn, engaged in a nonstop orgy of mergers, which created regional monopolies that could resist the pricing power of monopoly pharma – and screw insurers. That kicked off consolidation in insurance, which is why most Americans have a “choice” of between one and three private insurers – and why health workers’ monopoly employers have eroded their wages and working conditions.

Pluralistic

How consolidation in the hospital sector is increasing healthcare prices and creating even steeper costs more broadly in the economy. @nberpubs @AEAjournals @doctorow

* Mitch Kapor

###

As we measure our blood pressure, we might send concerned birthday greetings to Janette Sherman; she was born on this date in 1930. A physician, toxicologist, author, and activist, she researched pesticides, nuclear radiation, birth defects, breast cancer, and illnesses caused by toxins in homes, and was a pioneer in the field of occupational and environmental health.

Dr. Sherman served as a medical-legal expert witness in more than 5,000 workers’ compensation claims and served as an expert witness for residents in communities affected by environmental hazards, most famously the Love Canal neighborhood of Niagara Falls, N.Y. Her medical-legal files, among the largest collections of their kind in the United States, are preserved at the National Library of Medicine at the National Institutes of Health in Bethesda, Md.

source