(Roughly) Daily


“The clustering of technological innovation in time and space helps explain both the uneven growth among nations and the rise and decline of hegemonic powers”*…

As scholars like Robert Gordon and Tyler Cowen have begun to call out a slowing of progress and growth in the U.S., others are beginning to wonder if “innovation clusters” like Silicon Valley are still advantageous. For example, Brian J. Asquith…

In 2011, the economist Tyler Cowen published The Great Stagnation, a short treatise with a provocative hypothesis. Cowen challenged his audience to look beyond the gleam of the internet and personal computing, arguing that these innovations masked a more troubling reality. Cowen contended that, since the 1970s, there has been a marked stagnation in critical economic indicators: median family income, total factor productivity growth, and average annual GDP growth have all plateaued…

In the years since the publication of the Great Stagnation hypothesis, others have stepped forward to offer support for this theory. Robert Gordon’s 2017 The Rise and Fall of American Growth chronicles in engrossing detail the beginnings of the Second Industrial Revolution in the United States, starting around 1870, the acceleration of growth spanning the 1920–70 period, and then a general slowdown and stagnation since about 1970. Gordon’s key finding is that, while the growth rate of average total factor productivity from 1920 to 1970 was 1.9 percent, it was just 0.6 percent from 1970 to 2014, where 1970 represents a secular trend break for reasons still not entirely understood. Cowen’s and Gordon’s insights have since been further corroborated by numerous research papers. Research productivity across a variety of measures (researchers per paper, R&D spending needed to maintain existing growth rates, etc.) has been on the decline across the developed world. Languishing productivity growth extends beyond research-intensive industries. In sectors such as construction, the value added per worker was 40 percent lower in 2020 than it was in 1970. The trend is mirrored in firm productivity growth, where a small number of superstar firms see exceptionally strong growth and the rest of the distribution increasingly lags behind.

A 2020 article by Nicholas Bloom and three coauthors in the American Economic Review cut right to the chase by asking, “Are Ideas Getting Harder to Find?,” and answered its own question in the affirmative. Depending on the data source, the authors find that while the number of researchers has grown sharply, output per researcher has declined sharply, leading aggregate research productivity to decline by 5 percent per year.
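A rough sketch of the arithmetic behind that finding may help: Bloom and coauthors track research productivity as idea output (for example, TFP growth) divided by the effective number of researchers, so roughly flat output combined with a growing research workforce means the ratio falls year after year. The numbers below are invented purely for illustration, not taken from the paper.

```python
# Illustrative only: invented numbers, not Bloom et al.'s data.
def research_productivity(idea_output: float, researchers: float) -> float:
    """Idea output produced per (effective) researcher in a given year."""
    return idea_output / researchers

idea_output = 1.0    # hold aggregate idea output roughly flat
researchers = 100.0  # hypothetical starting headcount
growth = 0.05        # assume the research workforce grows ~5% per year

for year in (0, 10, 20, 30):
    headcount = researchers * (1 + growth) ** year
    print(year, round(research_productivity(idea_output, headcount), 4))
# Productivity falls by roughly the workforce growth rate each year,
# simply because the denominator keeps growing while output does not.
```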

This stagnation should elicit greater surprise and concern because it persists despite advanced economies adhering to the established economics prescription intended to boost growth and innovation rates: (1) promote mass higher education, (2) identify particularly bright young people via standardized testing and direct them to research-intensive universities, and (3) pipe basic research grants through the university system to foster locally-driven research and development networks that supercharge productivity…

… the tech cluster phenomenon stands out because there is a fundamental discrepancy between how the clusters function in practice versus their theoretical contributions to greater growth rates. The emergence of tech clusters has been celebrated by many leading economists because of a range of findings that innovative people become more productive (by various metrics) when they work in the same location as other talented people in the same field. In this telling, the essence of innovation can be boiled down to three things: co-location, co-location, co-location. No other urban form seems to facilitate innovation like a cluster of interconnected researchers and firms.

This line of reasoning yields a straightforward syllogism: technology clusters enhance individual innovation and productivity. The local nature of innovation notwithstanding, technologies developed within these clusters can be adopted and enjoyed globally. Thus, while not everyone can live in a tech cluster, individuals worldwide benefit from new advances and innovations generated there, and some of the outsized economic gains the clusters produce can then be redistributed to people outside of the clusters to smooth over any lingering inequalities. Therefore, any policy that weakens these tech clusters leads to a diminished rate of innovation and leaves humanity as a whole poorer.

Yet the fact that the emergence of the tech clusters has also coincided with Cowen’s Great Stagnation raises certain questions. Are there shortcomings in the empirical evidence on the effects of the tech clusters? Does technology really diffuse across the rest of the economy as many economists assume? Do the tech clusters inherently prioritize welfare-enhancing technologies? Is there some role for federal or state action to improve the situation? Clusters are not unique to the postwar period: Detroit famously achieved a large agglomeration economy based on automobiles in the early twentieth century, and several authors have drawn parallels between the ascents of Detroit and Silicon Valley. What makes today’s tech clusters distinct from past ones? The fact that the tech clusters have not yielded the same society-enhancing benefits that they once promised should invite further scrutiny…

How could this be? What can we do about it? Eminently worth reading in full: “Superstars or Black Holes: Are Tech Clusters Causing Stagnation?” (possible soft paywall), from @basquith827.

See also: Brad DeLong, on comments from Eric Schmidt: “That an externality market failure is partly counterbalanced and offset by a behavioral-irrationality-herd-mania cognitive failure is a fact about the world. But it does not mean that we should not be thinking and working very hard to build a better system—or that those who profit mightily from herd mania on the part of others should feel good about themselves.”

* Robert Gilpin

###

As we contemplate co-location, we might recall that it was on this date in 1956 that a denizen of one of America’s leading tech/innovation hubs, Jay Forrester at MIT [see here and here], was awarded a patent for his coincident current magnetic core memory (Patent No. 2,736,880). Forrester’s invention, a “multicoordinate digital information storage device,” became the standard memory device for digital computers until supplanted by solid state (semiconductor) RAM in the mid-1970s.

source

“The pursuit of science is a grand adventure, driven by curiosity, fueled by passion, and guided by reason”*…

Adam Mastroianni on how science advances (and how it’s held back), with a provocative set of suggestions for how it might be accelerated…

There are two kinds of problems in the world: strong-link problems and weak-link problems.

Weak-link problems are problems where the overall quality depends on how good the worst stuff is. You fix weak-link problems by making the weakest links stronger, or by eliminating them entirely.

Food safety, for example, is a weak-link problem. You don’t want to eat anything that will kill you. That’s why it makes sense for the Food and Drug Administration to inspect processing plants, to set standards, and to ban dangerous foods…

Weak-link problems are everywhere. A car engine is a weak-link problem: it doesn’t matter how great your spark plugs are if your transmission is busted. Nuclear proliferation is a weak-link problem: it would be great if, say, France locked up their nukes even tighter, but the real danger is some rogue nation blowing up the world. Putting on too-tight pants is a weak-link problem: they’re gonna split at the seams.

It’s easy to assume that all problems are like this, but they’re not. Some problems are strong-link problems: overall quality depends on how good the best stuff is, and the bad stuff barely matters. Like music, for instance. You listen to the stuff you like the most and ignore the rest. When your favorite band releases a new album, you go “yippee!” When a band you’ve never heard of and wouldn’t like anyway releases a new album, you go…nothing at all, you don’t even know it’s happened. At worst, bad music makes it a little harder for you to find good music, or it annoys you by being played on the radio in the grocery store while you’re trying to buy your beetle-free asparagus…

Strong-link problems are everywhere; they’re just harder to spot. Winning the Olympics is a strong-link problem: all that matters is how good your country’s best athletes are. Friendships are a strong-link problem: you wouldn’t trade your ride-or-dies for better acquaintances. Venture capital is a strong-link problem: it’s fine to invest in a bunch of startups that go bust as long as one of them goes to a billion…
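A toy model makes the distinction concrete (an editorial illustration of the idea, not anything from Mastroianni's post): in a weak-link problem overall quality tracks the worst component, while in a strong-link problem it tracks the best one.

```python
def weak_link_quality(components: list[float]) -> float:
    """Food safety, car engines, nuclear security: the worst part sets the outcome."""
    return min(components)

def strong_link_quality(components: list[float]) -> float:
    """Music, Olympic teams, venture capital, science: only the best part matters."""
    return max(components)

portfolio = [0.1, 0.2, 0.3, 9.5]  # mostly mediocre, one standout

print(weak_link_quality(portfolio))   # 0.1 -- dragged down by the worst item
print(strong_link_quality(portfolio)) # 9.5 -- carried entirely by the best item
```

On this toy reading, a weak-link intervention raises the minimum while a strong-link intervention raises the maximum, which is the policy difference the rest of the piece turns on.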

In the long run, the best stuff is basically all that matters, and the bad stuff doesn’t matter at all. The history of science is littered with the skulls of dead theories. No more phlogiston nor phlegm, no more luminiferous ether, no more geocentrism, no more measuring someone’s character by the bumps on their head, no more barnacles magically turning into geese, no more invisible rays shooting out of people’s eyes, no more plum pudding

Our current scientific beliefs are not a random mix of the dumbest and smartest ideas from all of human history, and that’s because the smarter ideas stuck around while the dumber ones kind of went nowhere, on average—the hallmark of a strong-link problem. That doesn’t mean better ideas win immediately. Worse ideas can soak up resources and waste our time, and frauds can mislead us temporarily. It can take longer than a human lifetime to figure out which ideas are better, and sometimes progress only happens when old scientists die. But when a theory does a better job of explaining the world, it tends to stick around.

(Science being a strong-link problem doesn’t mean that science is currently strong. I think we’re still living in the Dark Ages, just less dark than before.)

Here’s the crazy thing: most people treat science like it’s a weak-link problem.

Peer reviewing publications and grant proposals, for example, is a massive weak-link intervention. We spend ~15,000 collective years of effort every year trying to prevent bad research from being published. We force scientists to spend huge chunks of time filling out grant applications—most of which will be unsuccessful—because we want to make sure we aren’t wasting our money…

I think there are two reasons why scientists act like science is a weak-link problem.

The first reason is fear. Competition for academic jobs, grants, and space in prestigious journals is more cutthroat than ever. When a single member of a grant panel, hiring committee, or editorial board can tank your career, you better stick to low-risk ideas. That’s fine when we’re trying to keep beetles out of asparagus, but it’s not fine when we’re trying to discover fundamental truths about the world…

The second reason is status. I’ve talked to a lot of folks since I published The rise and fall of peer review and got a lot of comments, and I’ve realized that when scientists tell me, “We need to prevent bad research from being published!” they often mean, “We need to prevent people from gaining academic status that they don’t deserve!” That is, to them, the problem with bad research isn’t really that it distorts the scientific record. The problem with bad research is that it’s cheating

I get that. It’s maddening to watch someone get ahead using shady tactics, and it might seem like the solution is to tighten the rules so we catch more of the cheaters. But that’s weak-link thinking. The real solution is to care less about the hierarchy

Here’s our reward for a generation of weak-link thinking.

The US government spends ~10x more on science today than it did in 1956, adjusted for inflation. We’ve got loads more scientists, and they publish way more papers. And yet science is less disruptive than ever, scientific productivity has been falling for decades, and scientists rate the discoveries of decades ago as worthier than the discoveries of today. (Reminder, if you want to blame this on ideas getting harder to find, I will fight you.)…

Whether we realize it or not, we’re always making calls like this. Whenever we demand certificates, credentials, inspections, professionalism, standards, and regulations, we are saying: “this is a weak-link problem; we must prevent the bad!”

Whenever we demand laissez-faire, the cutting of red tape, the letting of a thousand flowers bloom, we are saying: “this is a strong-link problem; we must promote the good!”

When we get this right, we fill the world with good things and rid the world of bad things. When we don’t, we end up stunting science for a generation. Or we end up eating a lot of asparagus beetles…

“Science is a strong-link problem,” from @a_m_mastroianni in @science_seeds.

* James Clerk Maxwell

###

As we ponder the process of progress, we might spare a thought for Sir Christopher Wren; he died on this date in 1723.  A mathematician and astronomer (who co-founded and later served as president of the Royal Society), he is better remembered as one of the most highly acclaimed English architects in history; he was given responsibility for rebuilding 52 churches in the City of London after the Great Fire in 1666, including what is regarded as his masterpiece, St. Paul’s Cathedral, on Ludgate Hill.

Wren, whose scientific work ranged broadly– e.g., he invented a “weather clock” similar to a modern barometer, devised new engraving methods, and helped develop a blood transfusion technique– was admired by Isaac Newton, as Newton noted in the Principia.

 source

“The spirit of inquiry and the courage to challenge the status quo are at the heart of scientific progress”*…

Adam Mastroianni on the challenges– and opportunities– facing science…

Randomized-controlled trials only caught on about 80 years ago, and whenever I think about that, I have to sit down and catch my breath for a while. The thing everybody agrees is the “gold standard” of evidence, the thing the FDA requires before it will legally allow you to sell a drug—that thing is younger than my grandparents.

There are a few records of things that kind of look like randomized-controlled trials throughout history, but people didn’t really appreciate the importance of RCTs until 1948, when the British Medical Research Council published a trial on streptomycin for tuberculosis. Humans have possessed the methods of randomization for thousands of years—dice, coins, the casting of lots—and we’ve been trying to cure diseases for as long as we’ve been human. Why did it take us so long to put them together?

I think the answer is: first, we had to stop trusting Zeus.

To us, coin flips are random (“Heads: I go first. Tails: you go first.”). But to an ancient human, coin flips aren’t random at all—they reveal the will of the gods (“Heads: Zeus wants me to go first. Tails: Zeus wants you to go first”). In the Bible, for instance, people are always casting lots to figure out what God wants them to do: which goat to kill, who should get each tract of land, when to start a genocide, etc.

This is, of course, a big problem for running RCTs. If you think that the outcome of a coin flip is meaningful rather than meaningless, you can’t use it to produce two equivalent groups, and you can’t study the impact of doing something to one group and not the other. You can only run a ZCT—a Zeus controlled trial.
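To make the contrast concrete, here is a minimal sketch of the random-assignment step an RCT adds once the coin flip is treated as meaningless chance rather than a divine signal; the participant names and seed below are invented for illustration, and this is not the 1948 trial's actual protocol.

```python
import random

def randomize(participants: list[str], seed: int = 0) -> tuple[list[str], list[str]]:
    """Flip a fair 'coin' for each participant: heads -> treatment, tails -> control."""
    rng = random.Random(seed)
    treatment, control = [], []
    for person in participants:
        (treatment if rng.random() < 0.5 else control).append(person)
    return treatment, control

patients = [f"patient_{i}" for i in range(12)]
treated, controls = randomize(patients)
print("treatment:", treated)
print("control:  ", controls)
# Because assignment ignores everything about each patient, the two groups are
# equivalent on average, so any later difference can be credited to the treatment.
```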

It’s easy to see how technology can lead to scientific discoveries. Make microscope -> discover mitochondria.

Clearly, though, sometimes those technologies get invented entirely inside our heads. Stop trusting Zeus -> develop RCTs.

Which raises the question: what mental technologies haven’t we invented yet? What brain switches are just waiting to be flipped?…

On reinvigorating science: “Declining trust in Zeus is a technology,” from @a_m_mastroianni.

Apposite to an issue he raises: “Citation cartels help some mathematicians—and their universities—climb the rankings,” from @ScienceMagazine.

[Image above: source]

* Elizabeth Blackwell

###

As we deliberate on discovery, we might send micro-biological birthday greetings to a woman who modeled the attitude and behavior that Mastroianni suggests: Ruth Sager; she was born on this date in 1918. A pioneering geneticist, she had, in effect, two careers.

In the 1950s and 1960s, she pioneered the field of cytoplasmic genetics by discovering transmission of genetic traits through chloroplast DNA, the first known example of genetics not involving the cell nucleus. She identified a second set of genes, located outside of the cell’s nucleus, which, even though they were nonchromosomal, also influenced inherited characteristics. The academic community did not acknowledge the significance of her contribution until after the second wave of feminism in the 1970s.

Then, in the early 1970s, she moved into cancer genetics (with a specific focus on breast cancer); she proposed and investigated the roles of tumor suppressor genes. She identified over 100 potential tumor suppressor genes, developed cell culture methods to study normal and cancerous human and other mammalian cells in the laboratory, and pioneered the research into “expression genetics,” the study of altered gene expression.

source

“Any fool can know. The point is to understand.”*…

A corridor in King’s College, Cambridge, England dating from the 15th century

… and, Rachael Scarborough King and Seth Rudy argue, to serve a clear purpose…

Right now, many forms of knowledge production seem to be facing their end. The crisis of the humanities has reached a tipping point of financial and popular disinvestment, while technological advances such as new artificial intelligence programmes may outstrip human ingenuity. As news outlets disappear, extreme political movements question the concept of objectivity and the scientific process. Many of our systems for producing and certifying knowledge have ended or are ending.

We want to offer a new perspective by arguing that it is salutary – or even desirable – for knowledge projects to confront their ends. With humanities scholars, social scientists and natural scientists all forced to defend their work, from accusations of the ‘hoax’ of climate change to assumptions of the ‘uselessness’ of a humanities degree, knowledge producers within and without academia are challenged to articulate why they do what they do and, we suggest, when they might be done. The prospect of an artificially or externally imposed end can help clarify both the purpose and endpoint of our scholarship.

We believe the time has come for scholars across fields to reorient their work around the question of ‘ends’. This need not mean acquiescence to the logics of either economic utilitarianism or partisan fealty that have already proved so damaging to 21st-century institutions. But avoiding the question will not solve the problem. If we want the university to remain a viable space for knowledge production, then scholars across disciplines must be able to identify the goal of their work – in part to advance the Enlightenment project of ‘useful knowledge’ and in part to defend themselves from public and political mischaracterisation.

Our volume The Ends of Knowledge: Outcomes and Endpoints Across the Arts and Sciences (2023) asks how we should understand the ends of knowledge today. What is the relationship between an individual knowledge project – say, an experiment on a fruit fly, a reading of a poem, or the creation of a Large Language Model – and the aim of a discipline or field? In areas ranging from physics to literary studies to activism to climate science, we asked practitioners to consider the ends of their work – its purpose – as well as its end: the point at which it might be complete. The responses showed surprising points of commonality in identifying the ends of knowledge, as well as the value of having the end in sight…

Read on for a provocative case that academics need to think harder about the purpose of their disciplines and a consideration of whether some of those should come to an end: “The Ends of Knowledge,” in @aeonmag.

* Albert Einstein

###

As we contemplate conclusions, we might recall that it was on this date in 1869 that the first issue of the journal Nature was published. Taking its title from a line of Wordsworth’s (“To the solid ground of nature trusts the Mind that builds for aye”), its aim was to “provide cultivated readers with an accessible forum for reading about advances in scientific knowledge.” It remains a weekly, international, interdisciplinary journal of science, one of the few that still publish across a wide array of fields. It is consistently ranked the world’s most cited scientific journal and is ascribed an impact factor of approximately 64.8, making it one of the world’s top academic journals.

The first page of Nature’s first issue (source)

“No problem can be solved from the same level of consciousness that created it”*…

Annaka Harris on the difficulty in understanding consciousness…

The central challenge to a science of consciousness is that we can never acquire direct evidence of consciousness apart from our own experience. When we look at all the organisms (or collections of matter) in the universe and ask ourselves, “Which of these collections of matter contain conscious experiences?” in the broadest sense, the answer has to be “some” or “all”—the only thing we have direct evidence to support is that the answer isn’t “none,” as we know that at least our own conscious experiences exist.

Until we attain a significantly more advanced understanding of the brain, and of many other systems in nature for that matter, we’re forced to begin with one of two assumptions: either consciousness arises at some point in the physical world, or it is a fundamental part of the physical world (some, or all). And the sciences have thus far led with the assumption that the answer is “some” (and so have I, for most of my career) for understandable reasons. But I would argue that the grounds for this starting assumption have become weaker as we learn more about the brain and the role consciousness plays in behavior.

The problem is that what we deem to be conscious processes in nature is based solely on reportability. And at the very least, the work with split-brain and locked-in patients should have radically shifted our reliance on reportability at this point…

The realization that all of our scientific investigations of consciousness are unwittingly rooted in a blind assumption led me to pose two questions that I think are essential for a science of consciousness to keep asking:

  1. Can we find conclusive evidence of consciousness from outside a system?
  2. Is consciousness causal? (Is it doing something? Is it driving any behavior?)

The truth is that we have less and less reason to respond “yes” to either question with any confidence. And if the answer to these questions is in fact “no,” which is entirely possible, we’ll be forced to reconsider our jumping-off point. Personally, I’m still agnostic, putting the chances that consciousness is fundamental vs. emergent at more or less 50/50. But after focusing on this topic for more than twenty years, I’m beginning to think that assuming consciousness is fundamental is actually a slightly more coherent starting place…

“The Strong Assumption,” from @annakaharris.

See also: “How Do We Think Beyond Our Own Existence?“, from @annehelen.

* Albert Einstein

###

As we noodle on knowing, we might recall that it was on this date in 1987 that a patent (U.S. Patent No. 4,666,425) was awarded to Chet Fleming for a “Device for Perfusing an Animal Head”– a device for keeping a severed head alive.

That device, described as a “cabinet,” used a series of tubes to accomplish what a body does for most heads that are not “discorped”—that is, removed from their bodies. In the patent application, Fleming describes a series of tubes that would circulate blood and nutrients through the head and take deoxygenated blood away, essentially performing the duties of a living thing’s circulatory system. Fleming also suggested that the device might be used for grimmer purposes.

“If desired, waste products and other metabolites may be removed from the blood, and nutrients, therapeutic or experimental drugs, anti-coagulants and other substances may be added to the blood,” the patent reads.

Although obviously designed for research purposes, the patent does acknowledge that “it is possible that after this invention has been thoroughly tested on research animals, it might also be used on humans suffering from various terminal illnesses.”

Fleming, a trained lawyer who had the reputation of being an eccentric, wasn’t exactly joking, but he was worried that somebody would start doing this research. The patent was a “prophetic patent”—that is, a patent for something which has never been built and may never be built. It was likely intended to prevent others from trying to keep severed heads alive using that technology…

Smithsonian Magazine
An illustration from the patent application (source)

Written by (Roughly) Daily

May 19, 2023 at 1:00 am