(Roughly) Daily

Posts Tagged ‘learning’

“Memory resides not just in brains but in every cell”*…

An artistic representation of a cell illustrated with two faces merging in its center, surrounded by cellular structures like mitochondria and various organelles, set against a gradient background of soft colors.

As the redoubtable Claire L. Evans [and here] reports, a small but enthusiastic group of neuroscientists is exhuming overlooked experiments and performing new ones to explore whether cells record past experiences — fundamentally challenging our understanding of what memory is…

In 1983, the octogenarian geneticist Barbara McClintock stood at the lectern of the Karolinska Institute in Stockholm. She was famously publicity averse — nearly a hermit — but it’s customary for people to speak when they’re awarded a Nobel Prize, so she delivered a halting account of the experiments that had led to her discovery, in the early 1950s, of how DNA sequences can relocate across the genome. Near the end of the speech, blinking through wire-framed glasses, she changed the subject, asking: “What does a cell know of itself?”

McClintock had a reputation for eccentricity. Still, her question seemed more likely to come from a philosopher than a plant geneticist. She went on to describe lab experiments in which she had seen plant cells respond in a “thoughtful manner.” Faced with unexpected stress, they seemed to adjust in ways that were “beyond our present ability to fathom.” What does a cell know of itself? It would be the work of future biologists, she said, to find out.

Forty years later, McClintock’s question hasn’t lost its potency. Some of those future biologists are now hard at work unpacking what “knowing” might mean for a single cell, as they hunt for signs of basic cognitive phenomena — like the ability to remember and learn — in unicellular creatures and nonneural human cells alike. Science has long taken the view that a multicellular nervous system is a prerequisite for such abilities, but new research is revealing that single cells, too, keep a record of their experiences for what appear to be adaptive purposes.

In a provocative study published in Nature Communications late last year, the neuroscientist Nikolay Kukushkin and his mentor Thomas J. Carew at New York University showed that human kidney cells growing in a dish can “remember” patterns of chemical signals when they’re presented at regularly spaced intervals — a memory phenomenon common to all animals, but unseen outside the nervous system until now. Kukushkin is part of a small but enthusiastic cohort of researchers studying “aneural,” or brainless, forms of memory. What does a cell know of itself? So far, their research suggests that the answer to McClintock’s question might be: much more than you think…

[Evans explains the prevailing wisdom, outlines the experiments that have challenged it, and unpacks (at least some reasons for) resistance to the notion of cellular-scale memory, both sociological and semantic…]

… In neuroscience, [biochemist and neuroscientist Nikolay] Kukushkin writes, the most common definition of memory is that it’s what remains after experience to change future behavior. This is a behavioral definition; the only way to measure it is to observe that future behavior. Think of S. roeselii snapping back into its holdfast, or a lab rat freezing up at the sight of an electrified maze it’s tangled with before. In these cases, how an organism reacts is a clue that prior experience left a lingering trace.

But is a memory only a memory when it’s associated with an external behavior? “It seems like an arbitrary thing to decide,” Kukushkin said. “I understand why it was historically decided to be that, because [behavior] is the thing you can measure easily when you’re working with an animal. I think what happened is that behavior started as something that you could measure, and then it ended up being the definition of memory.”

Behavior tells us that a memory has formed but says nothing about why, how or where. Further, it’s constrained by scale. Take Aplysia californica, a muscular sea slug with enormous neurons (the largest is about the size of a letter on a U.S. penny). Neuroscientists love to conduct memory experiments on Aplysia because its physical responses are easy to measure — poke it and it flinches — and they map cleanly to the handful of sensory and motor neurons involved.

The sea slug, Kukushkin said, can complicate neuroscience’s behavioral bias. Say you shock its tail, triggering a defensive reflex. If you shock it again the next day and find that the defensive reflex is stronger than it was before, that’s behavioral evidence that the slug remembers its initial shock. Any neuroscientist would associate it with a memory.

But what if (apologies to the squeamish) you take that sea slug apart and leave only its immobile neurons? Unlike the intact creature, the neurons can’t retract, so there will be no visible response. Is the memory gone? Certainly not, but without external validation, a behavioral definition of memory breaks down. “We no longer call that a memory,” Kukushkin said. “We call that a mechanism for a memory, we call that synaptic change underlying memory, we call that an analogue of memory. But we don’t call that a memory, and I think that it’s arbitrary.”

Perhaps a definition of memory should extend beyond behavior to encompass more records of the past. A vaccination is a kind of memory. So is a scar, a child, a book. “If you make a footprint, it’s a memory,” Gershman said. An interpretation of memory as a physical event — as a mark made on the world, or on the self — would encompass the biochemical changes that occur within a cell. “Biological systems have evolved to harness those physical processes that retain information and use them for their own purposes,” [cognitive scientist Sam] Gershman said.

So, what does a cell know of itself? Perhaps a better version of Barbara McClintock’s question is: What can a cell remember? When it comes to survival, what a cell knows of itself isn’t as important as what it knows of the world: how it incorporates information about its experiences to determine when to bend, when to battle and when to make a break for it.

A cell preserves the information that preserves its existence. And in a sense, so do we. As today’s cellular memory researchers revisit abandoned experimental threads from the past, they too are discovering what memory owes to its context, how science’s sociological environment can determine which ideas are preserved and which are forgotten. It’s almost as though a field is waking up from a 50-year spell of amnesia. Fortunately, the memories are flooding back…

“What Can a Cell Remember?” from @theuniverse.bsky.social in @quantamagazine.bsky.social.

For more on the work that got Barbara McClintock onto the Nobel podium see here.

And, also apposite, a pair of cautionary historical examples of scientists who followed Jean-Baptiste Lamarck– who argued in the early 19th century that an organism can pass on to its offspring physical characteristics acquired through use or disuse during its lifetime, that’s to say, that learning (a kind of memory) is heritable– and went astray: Lysenko and Kammerer.

* James Gleick, The Information

###

As we muse on memory (and note that one cannot remember– and learn from– what one cannot know), we might recall that it was on this date in 1735 that New York Weekly Journal publisher and writer John Peter Zenger was acquitted of seditious libel against the royal governor of New York, William Cosby, on the basis that what he had published was true.

In 1733, Zenger had begun printing The New York Weekly Journal, voicing opinions critical of the colonial governor.  On November 17, 1734, on Cosby’s orders, the sheriff arrested Zenger. After a grand jury refused to indict him, the Attorney General Richard Bradley charged him with libel. Zenger’s lawyers, Andrew Hamilton and William Smith, Sr., successfully argued that truth is a defense against charges of libel… and Zenger became a symbol for freedom of the press.

An illustration depicting a courtroom scene with a judge, lawyers, and an audience, capturing the atmosphere of a historical trial.
Andrew Hamilton defending John Peter Zenger in court, 1734–1735 (source)

“Nature doesn’t feel compelled to stick to a mathematically precise algorithm; in fact, nature probably can’t stick to an algorithm.”*…

Just over 30 years ago, my GBN partner Stewart Brand and I were discussing the then-new web affordance PointCast, an active screensaver that displayed news and other information tailored to a user’s expressed interests and delivered live over the Internet. It was big news at the time; and while it failed, it prefigured the emergence of the algorithms that today cater to “preferences” that we don’t even need (nor, for that matter, have the opportunity) to articulate.

The problem, we mused, is that a system like that becomes a trap, one that (by simply satisfying expressed desires) implicitly works against discovery of the altogether new, of the thing we didn’t yet know might interest (or benefit) us. A system like that pulls us more deeply into holes instead of helping us explore broader horizons– it is biased against discovery, against learning (in its broadest sense). Our most important discoveries are often the book somewhere on the library shelf near the one we were seeking, or the article in the (old print) newspaper next to the one to which we were initially drawn.

The answer, we imagined, wasn’t to skip such systems altogether– they can play a useful role– but rather to introduce a complementary “dial-up randomness”: to create ways to feed ourselves a stream of surprises.
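(For the technically inclined, a toy sketch of what such a dial might look like– the names and the blending scheme below are illustrative assumptions of mine, a thought experiment in code rather than anyone’s actual system:)

// A toy "dial-up randomness" feed: blend a preference-driven list
// with random picks at a user-tunable rate. Hypothetical names throughout.

interface Item {
  id: string;
  title: string;
}

// dial = 0 leaves the personalized feed untouched; dial = 1 swaps in
// items from the random pool wherever the pool can supply them.
function blendFeed(personalized: Item[], randomPool: Item[], dial: number): Item[] {
  let next = 0; // index of the next unused item in the random pool
  return personalized.map((item) =>
    Math.random() < dial && next < randomPool.length ? randomPool[next++] : item
  );
}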

Benj Edwards reports on just such an affordance…

[Recently] a New York-based app developer named Isaac Gemal [here] debuted a new site called WikiTok, where users can vertically swipe through an endless stream of Wikipedia article stubs in a manner similar to the interface for video-sharing app TikTok.

It’s a neat way to stumble upon interesting information randomly, learn new things, and spend spare moments of boredom without reaching for an algorithmically addictive social media app. Although to be fair, WikiTok is addictive in its own way, but without an invasive algorithm tracking you and pushing you toward the lowest-common-denominator content. It’s also thrilling because you never know what’s going to pop up next.

WikiTok, which works through mobile and desktop browsers, feeds visitors a random list of Wikipedia articles—culled from the Wikipedia API—into a vertically scrolling interface. Despite the name that hearkens to TikTok, there are currently no videos involved. Each entry is accompanied by an image pulled from the corresponding article. If you see something you like, you can tap “Read More,” and the full Wikipedia page on the topic will open in your browser.

For now, the feed is truly random, and Gemal is currently resisting calls to automatically tailor the stream of articles to the user’s interests based on what they express interest in.

“I have had plenty of people message me and even make issues on my GitHub asking for some insane crazy WikiTok algorithm,” Gemal told Ars. “And I had to put my foot down and say something along the lines that we’re already ruled by ruthless, opaque algorithms in our everyday life; why can’t we just have one little corner in the world without them?”

The breadth of topics you’ll encounter on WikiTok is staggering, owing to the wide range of knowledge that Wikipedia covers…

… Gemal posted the code for WikiTok on GitHub, so anyone can modify or contribute to the project. Right now, the web app supports 14 languages, article previews, and article sharing on both desktop and mobile browsers. New features may arrive as contributors add them. It’s based on a tech stack that includes React 18, TypeScript, Tailwind CSS, and Vite.

And so far, he is sticking to his vision of a free way to enjoy Wikipedia without being tracked and targeted. “I have no grand plans for some sort of insane monetized hyper-calculating TikTok algorithm,” Gemal told us. “It is anti-algorithmic, if anything.”

WikiTok cures boredom in spare moments with wholesome swipe-ups: “Developer creates endless Wikipedia feed to fight algorithm addiction,” from @benjedwards.com in @arstechnica.com.
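(Again for the technically inclined: the mechanics Edwards describes are easy to approximate, since Wikipedia exposes a public REST endpoint that redirects to a random article summary. The sketch below is my own illustration– assuming that endpoint, and inventing the names and types around it– not WikiTok’s actual code:)

// Pull a batch of random article "cards" from Wikipedia's REST API.
// The endpoint is Wikimedia's random-summary route; the types and
// function names here are illustrative assumptions.

interface WikiCard {
  title: string;
  extract: string; // the article stub shown in the feed
  thumbnail?: { source: string }; // image accompanying the entry, when present
  content_urls: { desktop: { page: string } }; // target for "Read More"
}

// Fetch `count` random summaries in parallel (works in any fetch-capable
// environment, e.g. modern browsers or Node 18+).
async function fetchRandomCards(count: number): Promise<WikiCard[]> {
  const requests = Array.from({ length: count }, () =>
    fetch("https://en.wikipedia.org/api/rest_v1/page/random/summary").then(
      (res) => res.json() as Promise<WikiCard>
    )
  );
  return Promise.all(requests);
}

// Example: preload five cards for the scrolling feed.
// fetchRandomCards(5).then((cards) => cards.forEach((c) => console.log(c.title)));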

* Margaret Wertheim

###

As we supersize serendipity, we might recall that it was on this date in 1968 that a remarkably warm and open new neighbor moved into the neighborhood: Misterogers’ Neighborhood premiered nationally on public television stations.

Fred McFeely Rogers was born in Latrobe, Pennsylvania on March 20, 1928. After earning his bachelor’s degree in music from Rollins College in 1951, he worked briefly for NBC in New York. In 1953, he joined Pittsburgh’s new public television station WQED to work on The Children’s Corner, where he learned that sneakers were a lot quieter on the set than his dress shoes.

In 1961, Rogers moved to Toronto, Ontario to work on a new 15-minute show called Misterogers for CBC Television. In 1966, Rogers went back to WQED to create Misterogers’ Neighborhood.

In 1970, the show was renamed Mister Rogers’ Neighborhood. Production stopped in 1976 but resumed three years later, when Rogers felt that his work speaking to children wasn’t done. The show continued from 1979 through 2001. Mr. Rogers passed away on February 27, 2003.

In 2012, PBS debuted an animated “spinoff” of the show called Daniel Tiger’s Neighborhood, featuring the characters Rogers had created in his “Neighborhood of Make-Believe”; and in 2019, Tom Hanks portrayed Rogers in the film A Beautiful Day in the Neighborhood, a role that earned him an Oscar nomination.

source

Written by (Roughly) Daily

February 19, 2025 at 1:00 am

“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.”*…

Rings for sale in the Grand Bazaar, Istanbul, November 2024

The end of the year approaches, and thoughts turn to retrospectives. In what has become a (Roughly) Daily tradition, today’s edition features a year-end recap from the estimable Tom Whitwell, who shares a full deck of fascinating things he learned in 2024. For example…

6. The London Underground has a distinct form of mosquito, Culex pipiens f. molestus, genetically different from above-ground mosquitos, and present since at least the 1940s. [Katharine Byrne & Richard A Nichols]

7. Ozempic is a modified, synthetic version of a protein discovered in the venomous saliva of the Gila monster, a large, sluggish lizard native to the United States. [Scott Alexander]

22. In 2022, 55% of Macy’s income came from credit cards rather than retail sales. That’s fairly normal for US department stores. [Pan Kwan Yuk]

29. You can buy 200 real human molars for $900. [B for Bones, via Lauren]

32. In 1800, 1 in 3 people on earth were Chinese. Today, it’s less than 1 in 5. [Our World in Data, via Boyan Slat]

42. In the 2020s, over 16% of movies have colons in the title (like Spider-Man: Homecoming), up almost 300% since the 1990s. [Daniel Parris]

46. Between the 1920s and 1950s, millions of ‘enemies of the people’ — often educated elites — were sent to prison camps in the Soviet Union. Today, the areas around those camps are more prosperous and productive than similar areas. [Toews & Vézina]

Many more fascinating factoids at: “52 things I learned in 2024,” from @TomWhitwell.

Previous lists: 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023… and sprinkled throughout the December postings in (R)D over the years.

* Dr. Seuss

###

As we forage, we might recall that on this date in 1968 Marvin Gaye’s version of “I Heard It Through the Grapevine” began its seven-week occupancy of the #1 spot on Billboard’s Hot 100.

A year earlier, Gladys Knight and the Pips had had a hit with the tune (#1 on the R&B chart; #2 on the Hot 100). Gaye’s version overtook its predecessor and became the biggest hit single on the Motown family of labels up to that point. The Gaye recording has since become an acclaimed soul classic. In 1998 the song was inducted into the Grammy Hall of Fame for “historical, artistic and significant” value.

Written by (Roughly) Daily

December 14, 2024 at 1:00 am

“We are not what we know but what we are willing to learn”*…

Abigail Tulenko argues that folktales, like formal philosophy, unsettle us into thinking anew about our cherished values and views of the world…

The Hungarian folktale Pretty Maid Ibronka terrified and tantalised me as a child. In the story, the young Ibronka must tie herself to the devil with string in order to discover important truths. These days, as a PhD student in philosophy, I sometimes worry I’ve done the same. I still believe in philosophy’s capacity to seek truth, but I’m conscious that I’ve tethered myself to an academic heritage plagued by formidable demons.

The demons of academic philosophy come in familiar guises: exclusivity, hegemony and investment in the myth of individual genius. As the ethicist Jill Hernandez notes, philosophy has been slower to change than many of its sister disciplines in the humanities: ‘It may be a surprise to many … given that theology and, certainly, religious studies tend to be inclusive, but philosophy is mostly resistant toward including diverse voices.’ Simultaneously, philosophy has grown increasingly specialised due to the pressures of professionalisation. Academics zero in on narrower and narrower topics in order to establish unique niches and, in the process, what was once a discipline that sought answers to humanity’s most fundamental questions becomes a jargon-riddled puzzle for a narrow group of insiders.

In recent years, ‘canon-expansion’ has been a hot-button topic, as philosophers increasingly find the exclusivity of the field antithetical to its universal aspirations. As Jay Garfield remarks, it is as irrational ‘to ignore everything not written in the Eurosphere’ as it would be to ‘only read philosophy published on Tuesdays.’ And yet, academic philosophy largely has done just that. It is only in the past few decades that the mainstream has begun to engage seriously with the work of women and non-Western thinkers. Often, this endeavour involves looking beyond the confines of what, historically, has been called ‘philosophy’.

Expanding the canon generally isn’t so simple as resurfacing a ‘standard’ philosophical treatise in the style of white male contemporaries that happens to have been written by someone outside this demographic. Sometimes this does happen, as in the case of Margaret Cavendish (1623-73) whose work has attracted increased recognition in recent years. But Cavendish was the Duchess of Newcastle, a royalist whose political theory criticises social mobility as a threat to social order. She had access to instruction that was highly unusual for women outside her background, which lends her work a ‘standard’ style and structure. To find voices beyond this elite, we often have to look beyond this style and structure.

Texts formerly classified as squarely theological have been among the first to attract significant renewed interest. Female Catholic writers such as Teresa of Ávila or Sor Juana Inés de la Cruz, whose work had been largely ignored outside theological circles, are now being re-examined through a philosophical lens. Likewise, philosophy departments are gradually including more work by Buddhist philosophers such as Dignāga and Ratnakīrti, whose epistemological contributions have been of especial recent interest. Such thinkers may now sit on syllabi alongside Augustine or Aquinas who, despite their theological bent, have long been considered ‘worthy’ of philosophical engagement.

On the topic of ‘worthiness’, I am wary of using the term ‘philosophy’ as an honorific. It is crucial that our interest in expanding the canon does not involve the implication that the ‘philosophical’ confers a degree of rigour over the theological, literary, etc. To do so would be to engage in a myopic and uninteresting debate over academic borders. My motivating question is not what the label of ‘philosophy’ can confer upon these texts, but what these texts can bring to philosophy. If philosophy seeks insight into the nature of such universal topics as reality, morality, art and knowledge, it must seek input from those beyond a narrow few. Engaging with theology is a great start, but these authors still largely represent an elite literate demographic, and raise many of the same concerns regarding a hegemonic, exclusive and individualistic bent.

As Hernandez quips: ‘[W]e know white, Western men have not cornered the market on deeply human, philosophical questions.’ And furthermore, ‘we also know, prudentially, that philosophy as a discipline needs to (and must) undergo significant navel-gazing to survive … in an ever-increasingly difficult time for homogenous, exclusive academic disciplines.’ In light of our aforementioned demons, it appears that philosophy is in urgent need of an exorcism.

I propose that one avenue forward is to travel backward into childhood – to stories like Ibronka’s. Folklore is an overlooked repository of philosophical thinking from voices outside the traditional canon. As such, it provides a model for new approaches that are directly responsive to the problems facing academic philosophy today. If, like Ibronka, we find ourselves tied to the devil, one way to disentangle ourselves may be to spin a tale…

Wisdom is where we find it: “Folklore is philosophy,” in @aeonmag. Eminently worth reading in full.

Apposite: “Syncretic Past.”

* Mary Catherine Bateson

###

As we update our understanding of understanding, we might send thoughtful birthday greetings to Michael Sandel; he was born on this date in 1953. A philosopher and professor of government theory at Harvard Law School (where his course Justice was the university’s first course to be made freely available online and on television, seen so far by tens of millions of people around the world), he is probably best known for his critique of John Rawls‘ A Theory of Justice (in Sandel’s book, Liberalism and the Limits of Justice).

Sandel subscribes to a certain version of communitarianism (although he is uncomfortable with the label), and in this vein he is perhaps best known for his critique of John Rawls’s A Theory of Justice. Rawls’s argument depends on the assumption of the veil of ignorance, which Sandel argues commits Rawls to a view of people as “unencumbered selves”. Sandel’s view is that we are by nature encumbered to an extent that makes it impossible even hypothetically to have such a veil. Some examples of such ties are those with our families, which we do not make by conscious choice but are born with, already attached. Because they are not consciously acquired, it is impossible to separate oneself from such ties. Sandel believes that only a less-restrictive, looser version of the veil of ignorance should be postulated. Criticism such as Sandel’s inspired Rawls to subsequently argue that his theory of justice was not a “metaphysical” theory but a “political” one, a basis on which an overriding consensus could be formed among individuals and groups with many different moral and political views.

source


“One thing I’ve learned over time is, if you hit a golf ball into water, it won’t float”*…

Happy New Year!

In the spirit of Tom Whitwell’s lists, Jason Kottke‘s collection of learnings from 2023-gone-by…

Purple Heart medals that were made for the planned (and then cancelled) invasion of Japan in 1945 are still being given out to wounded US military personnel.

The San Francisco subway system still runs on 5 1/4-inch floppies.

Bottled water has an expiration date — it’s the bottle not the water that expires.

Multicellular life developed on Earth more than 25 separate times.

Horseshoe crabs are older than Saturn’s rings.

Ernest Hemingway only used 59 exclamation points across his entire collection of works.

MLB broadcaster Vin Scully’s career lasted 67 seasons, during which he called a game managed by Connie Mack (born in 1862) and one Julio Urías (born in 1996) played in.

Almost 800,000 Maryland license plates include a URL that now points to an online casino in the Philippines because someone let the domain registration lapse.

Dozens more at: “52 Interesting Things I Learned in 2023.”

* Arnold Palmer

###

As we live and learn, we might spare a thought for Grace Brewster Murray Hopper; she died on this date in 1992.  A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

Source

Written by (Roughly) Daily

January 1, 2024 at 1:00 am