Posts Tagged ‘research’
“Any fool can know. The point is to understand.”*…
… and, Rachael Scarborough King and Seth Rudy argue, to serve a clear purpose…
Right now, many forms of knowledge production seem to be facing their end. The crisis of the humanities has reached a tipping point of financial and popular disinvestment, while technological advances such as new artificial intelligence programmes may outstrip human ingenuity. As news outlets disappear, extreme political movements question the concept of objectivity and the scientific process. Many of our systems for producing and certifying knowledge have ended or are ending.
We want to offer a new perspective by arguing that it is salutary – or even desirable – for knowledge projects to confront their ends. With humanities scholars, social scientists and natural scientists all forced to defend their work, from accusations of the ‘hoax’ of climate change to assumptions of the ‘uselessness’ of a humanities degree, knowledge producers within and without academia are challenged to articulate why they do what they do and, we suggest, when they might be done. The prospect of an artificially or externally imposed end can help clarify both the purpose and endpoint of our scholarship.
We believe the time has come for scholars across fields to reorient their work around the question of ‘ends’. This need not mean acquiescence to the logics of either economic utilitarianism or partisan fealty that have already proved so damaging to 21st-century institutions. But avoiding the question will not solve the problem. If we want the university to remain a viable space for knowledge production, then scholars across disciplines must be able to identify the goal of their work – in part to advance the Enlightenment project of ‘useful knowledge’ and in part to defend themselves from public and political mischaracterisation.
Our volume The Ends of Knowledge: Outcomes and Endpoints Across the Arts and Sciences (2023) asks how we should understand the ends of knowledge today. What is the relationship between an individual knowledge project – say, an experiment on a fruit fly, a reading of a poem, or the creation of a Large Language Model – and the aim of a discipline or field? In areas ranging from physics to literary studies to activism to climate science, we asked practitioners to consider the ends of their work – its purpose – as well as its end: the point at which it might be complete. The responses showed surprising points of commonality in identifying the ends of knowledge, as well as the value of having the end in sight…
Read on for a provocative case that academics need to think harder about the purpose of their disciplines and a consideration of whether some of those should come to an end: “The Ends of Knowledge,” in @aeonmag.
* Albert Einstein
###
As we contemplate conclusions, we might recall that it was on this date in 1869 that the first issue of the journal Nature was published. Taking its title from a line of Wordsworth’s (“To the solid ground of nature trusts the Mind that builds for aye”), its aim was to “provide cultivated readers with an accessible forum for reading about advances in scientific knowledge.” It remains a weekly, international, interdisciplinary journal of science, one of the few remaining that publish across a wide array of fields. It is consistently ranked the world’s most cited scientific journal and is ascribed an impact factor of approximately 64.8, making it one of the world’s top academic journals.

“No problem can be solved from the same level of consciousness that created it”*…
Annaka Harris on the difficulty in understanding consciousness…
The central challenge to a science of consciousness is that we can never acquire direct evidence of consciousness apart from our own experience. When we look at all the organisms (or collections of matter) in the universe and ask ourselves, “Which of these collections of matter contain conscious experiences?” in the broadest sense, the answer has to be “some” or “all”—the only thing we have direct evidence to support is that the answer isn’t “none,” as we know that at least our own conscious experiences exist.
Until we attain a significantly more advanced understanding of the brain, and of many other systems in nature for that matter, we’re forced to begin with one of two assumptions: either consciousness arises at some point in the physical world, or it is a fundamental part of the physical world (some, or all). And the sciences have thus far led with the assumption that the answer is “some” (and so have I, for most of my career) for understandable reasons. But I would argue that the grounds for this starting assumption have become weaker as we learn more about the brain and the role consciousness plays in behavior.
The problem is that what we deem to be conscious processes in nature is based solely on reportability. And at the very least, the work with split-brain and locked-in patients should have radically shifted our reliance on reportability at this point…
The realization that all of our scientific investigations of consciousness are unwittingly rooted in a blind assumption led me to pose two questions that I think are essential for a science of consciousness to keep asking:
- Can we find conclusive evidence of consciousness from outside a system?
- Is consciousness causal? (Is it doing something? Is it driving any behavior?)
The truth is that we have less and less reason to respond “yes” to either question with any confidence. And if the answer to these questions is in fact “no,” which is entirely possible, we’ll be forced to reconsider our jumping-off point. Personally, I’m still agnostic, putting the chances that consciousness is fundamental vs. emergent at more or less 50/50. But after focusing on this topic for more than twenty years, I’m beginning to think that assuming consciousness is fundamental is actually a slightly more coherent starting place…
“The Strong Assumption,” from @annakaharris.
See also: “How Do We Think Beyond Our Own Existence?”, from @annehelen.
* Albert Einstein
###
As we noodle on knowing, we might recall that it was on this date in 1987 that a patent (U.S. Patent No. 4,666,425) was awarded to Chet Fleming for a “Device for Perfusing an Animal Head”– a device for keeping a severed head alive.
That device, described as a “cabinet,” used a series of tubes to accomplish what a body does for most heads that are not “discorped”—that is, removed from their bodies. In the patent application, Fleming describes a series of tubes that would circulate blood and nutrients through the head and take deoxygenated blood away, essentially performing the duties of a living thing’s circulatory system. Fleming also suggested that the device might be used for grimmer purposes.
“If desired, waste products and other metabolites may be removed from the blood, and nutrients, therapeutic or experimental drugs, anti-coagulants and other substances may be added to the blood,” the patent reads.
Although obviously designed for research purposes, the patent does acknowledge that “it is possible that after this invention has been thoroughly tested on research animals, it might also be used on humans suffering from various terminal illnesses.”
Fleming, a trained lawyer with a reputation as an eccentric, wasn’t exactly joking: he was worried that somebody would eventually start doing this research. The patent was a “prophetic patent”—that is, a patent for something that has never been built and may never be built. It was likely intended to prevent others from trying to keep severed heads alive using that technology…
Smithsonian Magazine

“You must not fool yourself, and you are the easiest person to fool”*…

The quest for room-temperature superconductivity seems a bit like the hunt for the Holy Grail. A superconductor is a material that transmits electricity with zero resistance– and thus with no loss. (Estimates of loss in the U.S. electric grid, most of it due to heat loss from resistance in transmission, range from 5-10%; at the low end, that’s enough to power all seven Central American countries four times over.) Beyond that (already extraordinary) benefit, superconductivity could enable high-efficiency electric motors, maglev trains, low-cost magnets for MRI and nuclear fusion, a promising form of quantum computing (superconducting qubits), and much, much more.
Superconductivity was discovered in 1911, and has been the subject of fervent study ever since; indeed, four Nobel prizes have gone to scientists working on it, most recently in 2003. But while both understanding and application have advanced, it has remained the case that superconductivity can be achieved only at very low temperatures (or very high pressures). Until the mid-80s, it was believed that superconductivity could occur only below 30 Kelvin (-405.67 degrees Fahrenheit); by 2015, scientists had gotten that up to 80 K (-316 degrees Fahrenheit)… that’s to say, still requiring way too much cooling to be widely practical.
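(For anyone checking those figures, they follow from the standard Kelvin-to-Fahrenheit conversion:

$$T_{\mathrm{^{\circ}F}} = \left(T_{\mathrm{K}} - 273.15\right) \times \tfrac{9}{5} + 32$$

$$30\ \mathrm{K} \;\rightarrow\; (30 - 273.15) \times 1.8 + 32 = -405.67\ \mathrm{^{\circ}F}, \qquad 80\ \mathrm{K} \;\rightarrow\; (80 - 273.15) \times 1.8 + 32 \approx -316\ \mathrm{^{\circ}F}.)$$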
So imagine the excitement earlier this month, when…
In a packed talk on Tuesday afternoon at the American Physical Society’s annual March meeting in Las Vegas, Ranga Dias, a physicist at the University of Rochester, announced that he and his team had achieved a century-old dream of the field: a superconductor that works at room temperature and near-room pressure. Interest was so intense in the presentation that security personnel stopped entry to the overflowing room more than fifteen minutes before the talk. They could be overheard shooing curious onlookers away shortly before Dias began speaking.
The results, published in Nature, appear to show that a conventional conductor — a solid composed of hydrogen, nitrogen and the rare-earth metal lutetium — was transformed into a flawless material capable of conducting electricity with perfect efficiency.
While the announcement has been greeted with enthusiasm by some scientists, others are far more cautious, pointing to the research group’s controversial history of alleged research malfeasance. (Dias strongly denies the accusations.) Reactions by 10 independent experts contacted by Quanta ranged from unbridled excitement to outright dismissal…
Interesting if true– a paper in Nature divides the research community: “Room-Temperature Superconductor Discovery Meets With Resistance,” from @QuantaMagazine.
* Richard Feynman
###
As we review research, we might pause, on Pi Day, for a piece of pi(e)…

… in celebration of Albert Einstein’s birthday; he was born on this date in 1879.

“Everything should be made as simple as possible, but not simpler.”
“For the sake of the science, it might be time for scientists to start trusting each other a little less”*…
We’ve looked at methodological problems in scientific and medical research before (see here, here, and here). Let us turn now to outright dishonesty. The rising number of retracted research papers suggests either that medical research fraud is on the rise or that efforts to spot it are getting better. Either way, it’s a problem…
… Partly or entirely fabricated papers are being found in ever-larger numbers, thanks to sleuths like Dr Mol. Retraction Watch, an online database, lists nearly 19,000 papers on biomedical-science topics that have been retracted (see chart 1). In 2022 there were about 2,600 retractions in this area—more than twice the number in 2018. Some were the results of honest mistakes, but misconduct of one sort or another is involved in the vast majority of them…
… Yet journals can take years to retract, if they ever do so. Going by these numbers, roughly one in 1,000 papers gets retracted. That does not sound too bad. However, Ivan Oransky, one of Retraction Watch’s founders, reckons, based on various studies of the matter and reports from sleuths, that something more like one in 50 papers has results which are unreliable because of fabrication, plagiarism or serious errors…
… It is often asserted that science is self-correcting. And it is true that, if a claimed result is important enough, an inability to replicate it or of subsequent work to conform to it will eventually be noticed. In the short term, though, it is easy to hide in the shadows. Even co-authors of a data-fabricating scientist—those, in other words, who are closest to him or her—may not notice what the culprit is up to. In complex studies of a particular disease, several types of researchers will be involved, who are, by definition, not experts in each other’s fields. As Dr Bishop observes, “You just tend to take on trust the bits of data that somebody else has given you.”…
In the end, however, keeping fakes out of the scientific record depends on the willingness of publishers to stump up more resources. Statistical checks of clinical-trial papers often involve laborious manual work, such as typing up specific data in spreadsheets. This would require journals to hire dedicated staff, cutting into profits.
Many academics who have spent years trying to get fabricated papers retracted are pessimistic that better ways to detect fraud will, alone, make a big difference. Dr Roberts and Dr Mol want journals to be regulated in the way that social media and the news business are in some countries, with standards on what they publish. Peter Wilmshurst, a British cardiologist who has raised the alarm about numerous cases of research misconduct in his field, thinks there should be criminal penalties for those who fabricate data. Dr Gunsalus wants universities to make public the reports from their research-fraud investigations. And everyone agrees that publish or perish is a recipe for disaster.
None of these solutions will be quick or straightforward. But it is now clear that choosing to look the other way is causing palpable harm to patients…
“There is a worrying amount of fraud in medical research– and a worrying unwillingness to do anything about it,” from @TheEconomist.
* Stuart Ritchie, Science Fictions
###
As we look harder, we might spare a thought for Alfred Habdank Skarbek Korzybski; he died on this date in 1950. Trained as an engineer, he developed a field called general semantics, which he viewed as both distinct from, and more encompassing than, the field of semantics. He argued that human knowledge of the world is limited both by the human nervous system and by the languages humans have developed, and thus that no one can have direct access to reality, since the most we can know is that which is filtered through the brain’s responses to reality. (Korzybski assumed that the quest for knowledge was an authentic, honest one; that said, if “human nervous system” can be understood to extend to “human nature”…)
Korzybski was influential in fields across the sciences and humanities through the 1940s and 50s (perhaps most notably among gestalt therapists), and inspired science fiction writers (like Robert Heinlein and A.E. van Vogt) and philosophers like Alan Watts.
His best known dictum is “The map is not the territory.”
“My fake plants died because I didn’t pretend to water them”*…
Your correspondent treasures Wikipedia, and uses it often. But as Marco Silva points out, it has its vulnerabilities…
“I read through Wikipedia a lot when I’m bored in class,” says Adam, aged 15, who studies photography and ICT at a school in Kent. One day last July, one of his teachers mentioned the online encyclopaedia’s entry about Alan MacMasters, who it said was a Scottish scientist from the late 1800s and had invented “the first electric bread toaster”.
At the top of the page was a picture of a man with a pronounced quiff and long sideburns, gazing contemplatively into the distance – apparently a relic of the 19th Century, the photograph appeared to have been torn at the bottom.
But Adam was suspicious. “It didn’t look like a normal photo,” he tells me. “It looked like it was edited.”
After he went home, he decided to post about his suspicions on a forum devoted to Wikipedia vandalism.
…
Until recently, if you had searched for “Alan MacMasters” on Wikipedia, you would have found the same article that Adam did. And who would have doubted it?
After all, like most Wikipedia articles, this one was peppered with references: news articles, books and websites that supposedly provided evidence of MacMasters’ life and legacy. As a result, lots of people accepted that MacMasters had been real.
More than a dozen books, published in various languages, named him as the inventor of the toaster. And, until recently, even the Scottish government’s Brand Scotland website listed the electric toaster as an example of the nation’s “innovative and inventive spirit”…
All the while, as the world got to know the supposed Scottish inventor, there was someone in London who could not avoid a smirk as the name “Alan MacMasters” popped up – again and again – on his screen…
For more than a decade, a prankster spun a web of deception about the inventor of the electric toaster: “Alan MacMasters: How the great online toaster hoax was exposed,” from @MarcoLSilva at @BBCNews.
* Mitch Hedberg
###
As we consider the source’s source, we might recall that it was on this date in 1972 that Atari introduced its first product, Pong, which became the world’s first commercially successful video game. Indeed, Pong sparked the beginning of the video game industry, and positioned Atari as its leader (in both arcade and home video gaming) through the early 1980s.