(Roughly) Daily

Posts Tagged ‘learning’

“You live and learn. At any rate, you live.”*…

… and to the extent that we care about our democracy, that’s an issue.

In an article based on his recent Sakurada-Kai Foundation Oxbridge Lecture at Keio University, Tokyo, John Dunn argues that our democracies depend on our picking up the pace of learning. The abstract:

There cannot be a coherent democratic theory because democracy is not a determinate topic. Representative democracy is a relatively modern regime form. It now needs rehabilitation because so many instances have performed poorly for so long. Representative democracy is now also an aging regime. As a type of state, it is subject to the territorial contentiousness and contested legitimacy of any state. It claims its legitimacy from iterative popular choice, but the plausibility of that claim is increasingly strained by the drastic disparities in life chances reproduced through the property systems it protects. The inherent difficulty for citizens to judge how to advance their collective interests is aggravated by the recent transformation of the information economy. In the cumulative damage inflicted by climate change it faces a deadlier peril than any previous regime and one which only a citizenry that can enlighten itself in time can reasonably hope to nerve itself to meet…

There follows a fascinating– and provocative– elaboration of this thesis in which Dunn considers the history of democracy and the alternatives with which it has, since its inception, vied. He concludes in a bracing fashion…

… The varieties of autocracy which will be on offer wherever the rest of the world has the opportunity to take them up will be without exception the reverse of enlightened – instrumentally and compulsively bound to the extremes of obscurantism, Darkness as a full-on fideist commitment, deliberate self-blinding as a navigational strategy. Move fast, break lots, and never pause to inspect the wreckage.

Representative democracy has recently proved itself a poor structure for collective enlightenment, but the case for it depends on its at least not precluding that, its being still open to making the attempt, and responding to what it can contrive to learn. The most optimistic vision of democracy in action has always seen it as an opportunity for collective self-education on the content of shared goods and the means to achieve them. If that is scarcely a realist picture of what it has ever been, at least it is an image of the right shape. It is too late to ask who will educate the educators. At this point we must educate ourselves together and heed the lessons of that education or we must and will die – not just each of us one by one, as we were always fated to do, but soon enough all of us and for ever…

Eminently worth reading in full: “Can Democracy be Rehabilitated?”

Apposite: “How American Democracy Fell So Far Behind,” from Steven Levitsky and Daniel Ziblatt (gift article– and source of the image above)

* Douglas Adams, Mostly Harmless

###

As we devote ourselves to democracy, we might spare a thought for Ludwig van Beethoven; he died on this date in 1827. A crucial figure in the transition between the Classical and Romantic eras in Western music, he remains one of the most famous and influential of all composers. His best-known compositions include 9 symphonies, 5 concertos for piano, 32 piano sonatas, and 16 string quartets. He also composed other chamber music, choral works (including the celebrated Missa Solemnis), a single opera (Fidelio), and numerous songs.

Relevantly to the piece above…

Beethoven admired the ideals of the French Revolution, so he dedicated his third symphony to Napoleon Bonaparte… until Napoleon declared himself emperor. Beethoven then sprang into a rage, ripped the front page from his manuscript and scrubbed out Napoleon’s name…

Beethoven’s temper and Symphony No. 3 ‘Eroica’
Beethoven’s dedication in his manuscript of Symphony No. 3, after his “revision” (source)

Written by (Roughly) Daily

March 26, 2026 at 1:00 am

“Tell me to what you pay attention and I will tell you who you are”*…

A man wearing a gas mask operates a device at a wooden table, with letters L, A, M, F, and E visible on the table. Equipment and hoses are connected to the device.
A test subject has his oxygen consumption measured while using Walter R. Miles’ Pursuitmeter, as pictured in the inventor’s 1921 article for the Journal of Experimental Psychology. Source.

Before the attention economy consumed our lives, “pursuit tests” devised by the US military coupled man to machine with the aim of assessing focus under pressure. D. Graham Burnett explores these devices for evaluating aviators, finding a pre-history of the laboratory research that has relentlessly worked to slice and dice the attentional powers of human beings…

We worry about our attention these days — nearly all of us. There is something. . . wrong. We cannot manage to do what we want to do with our eyes and minds — not for long, anyway. We keep coming back to the machines, to the screens, to the notifications, to the blinking cursor and the frictionless swipe that renews the feed.

An ethnographer from Mars, moving among us (would we even notice?), might have trouble understanding our complaint: “Trouble with their attention? They stare at small slabs of versicolor glass all day! Their attentive powers are. . . sublime!”

And that misunderstanding rather sharpens the point: we don’t have any problem at all with the forms of attention that involve remaining engaged with, and responsive to, machines. We are amazing at the click and tap of durational vigilance to this or that stimulus, presented at the business end of a complex device. Our uncanny and immersive cybernetic attention is a defining characteristic of the age. Our human attention — our ability to be with ourselves and with others, our ability to receive the world with our minds and senses, our ability to daydream, read a book uninterrupted, or watch a sunset — well, many of us are finding it increasingly difficult to remember what that might even mean.

This isn’t really an accident. Over the last century or so, a series of elaborate programs of laboratory research have worked to slice and dice the attentional powers of human beings. Their aim? To understand the operational capacities of those who would be asked to shoot down airplanes, monitor radar screens, and otherwise sit at the controls of large and expensive machines. Seated in front of countless instruments, experimental subjects were asked to listen and look, to track and trigger. Psychologists stood by with stopwatches, quantifying our cybernetic capacities, and seeking ways to extend them. For those of us who have come of age in the fluorescence of the “attention economy”, it is interesting to look back and try to catch glimpses of the way that the movement of human eyeballs came under precise scrutiny, the way that machine vigilance became a field of study. We know now that the mechanomorphic attention dissected in those laboratories is the machine attention that is relentlessly priced in our online lives — to deleterious effects.

You could say that this process began with the fascinating and now mostly forgotten tool known as the “pursuit test”. Part steampunk videogame, part laboratory snuff-flick, the pursuit test staged and restaged the integration of man and machine across the first decades of the twentieth century…

Fascinating– and timely: “Cybernetic Attention– All Watched over by Machines We Learned to Watch,” from @publicdomainrev.bsky.social. Eminently worth reading in full.

* José Ortega y Gasset

###

As we untangle engagement, we might send thoughtful birthday greetings to a man whose work influenced the endeavors described in the piece featured above, Hermann Ebbinghaus; he was born on this date in 1850. A psychologist, he pioneered the experimental study of memory and discovered the learning curve, the forgetting curve, and the spacing effect.

Black and white portrait of a man with a large beard, wearing round glasses and a formal suit, looking directly at the camera.

source

“Memory resides not just in brains but in every cell”*…

An artistic representation of a cell illustrated with two faces merging in its center, surrounded by cellular structures like mitochondria and various organelles, set against a gradient background of soft colors.

As the redoubtable Claire L. Evans [and here] reports, a small but enthusiastic group of neuroscientists is exhuming overlooked experiments and performing new ones to explore whether cells record past experiences — fundamentally challenging our understanding of what memory is…

In 1983, the octogenarian geneticist Barbara McClintock stood at the lectern of the Karolinska Institute in Stockholm. She was famously publicity averse — nearly a hermit — but it’s customary for people to speak when they’re awarded a Nobel Prize, so she delivered a halting account of the experiments that had led to her discovery, in the early 1950s, of how DNA sequences can relocate across the genome. Near the end of the speech, blinking through wire-framed glasses, she changed the subject, asking: “What does a cell know of itself?”

McClintock had a reputation for eccentricity. Still, her question seemed more likely to come from a philosopher than a plant geneticist. She went on to describe lab experiments in which she had seen plant cells respond in a “thoughtful manner.” Faced with unexpected stress, they seemed to adjust in ways that were “beyond our present ability to fathom.” What does a cell know of itself? It would be the work of future biologists, she said, to find out.

Forty years later, McClintock’s question hasn’t lost its potency. Some of those future biologists are now hard at work unpacking what “knowing” might mean for a single cell, as they hunt for signs of basic cognitive phenomena — like the ability to remember and learn — in unicellular creatures and nonneural human cells alike. Science has long taken the view that a multicellular nervous system is a prerequisite for such abilities, but new research is revealing that single cells, too, keep a record of their experiences for what appear to be adaptive purposes.

In a provocative study published in Nature Communications late last year, the neuroscientist Nikolay Kukushkin and his mentor Thomas J. Carew at New York University showed that human kidney cells growing in a dish can “remember” patterns of chemical signals when they’re presented at regularly spaced intervals — a memory phenomenon common to all animals, but unseen outside the nervous system until now. Kukushkin is part of a small but enthusiastic cohort of researchers studying “aneural,” or brainless, forms of memory. What does a cell know of itself? So far, their research suggests that the answer to McClintock’s question might be: much more than you think…

[Evans explains the prevailing wisdom, outlines the experiments that have challenged it, and unpacks (at least some reasons for) resistance to the notion of cellular-scale memory, both sociological and semantic…]

… In neuroscience, [biochemist and neuroscientist Nikolay] Kukushkin writes, the most common definition of memory is that it’s what remains after experience to change future behavior. This is a behavioral definition; the only way to measure it is to observe that future behavior. Think of S. roeselii snapping back into its holdfast, or a lab rat freezing up at the sight of an electrified maze it’s tangled with before. In these cases, how an organism reacts is a clue that prior experience left a lingering trace.

But is a memory only a memory when it’s associated with an external behavior? “It seems like an arbitrary thing to decide,” Kukushkin said. “I understand why it was historically decided to be that, because [behavior] is the thing you can measure easily when you’re working with an animal. I think what happened is that behavior started as something that you could measure, and then it ended up being the definition of memory.”

Behavior tells us that a memory has formed but says nothing about why, how or where. Further, it’s constrained by scale. Take Aplysia californica, a muscular sea slug with enormous neurons (the largest is about the size of a letter on a U.S. penny). Neuroscientists love to conduct memory experiments on Aplysia because its physical responses are easy to measure — poke it and it flinches — and they map cleanly to the handful of sensory and motor neurons involved.

The sea slug, Kukushkin said, can complicate neuroscience’s behavioral bias. Say you shock its tail, triggering a defensive reflex. If you shock it again the next day and find that the defensive reflex is stronger than it was before, that’s behavioral evidence that the slug remembers its initial shock. Any neuroscientist would associate it with a memory.

But what if (apologies to the squeamish) you take that sea slug apart and leave only its immobile neurons? Unlike the intact creature, the neurons can’t retract, so there will be no visible response. Is the memory gone? Certainly not, but without external validation, a behavioral definition of memory breaks down. “We no longer call that a memory,” Kukushkin said. “We call that a mechanism for a memory, we call that synaptic change underlying memory, we call that an analogue of memory. But we don’t call that a memory, and I think that it’s arbitrary.”

Perhaps a definition of memory should extend beyond behavior to encompass more records of the past. A vaccination is a kind of memory. So is a scar, a child, a book. “If you make a footprint, it’s a memory,” [cognitive scientist Sam] Gershman said. An interpretation of memory as a physical event — as a mark made on the world, or on the self — would encompass the biochemical changes that occur within a cell. “Biological systems have evolved to harness those physical processes that retain information and use them for their own purposes,” Gershman said.

So, what does a cell know of itself? Perhaps a better version of Barbara McClintock’s question is: What can a cell remember? When it comes to survival, what a cell knows of itself isn’t as important as what it knows of the world: how it incorporates information about its experiences to determine when to bend, when to battle and when to make a break for it.

A cell preserves the information that preserves its existence. And in a sense, so do we. As today’s cellular memory researchers revisit abandoned experimental threads from the past, they too are discovering what memory owes to its context, how science’s sociological environment can determine which ideas are preserved and which are forgotten. It’s almost as though a field is waking up from a 50-year spell of amnesia. Fortunately, the memories are flooding back…

“What Can a Cell Remember?” from @theuniverse.bsky.social in @quantamagazine.bsky.social.

For more on the work that got Barbara McClintock onto the Nobel podium see here.

And, also apposite, a pair of cautionary historical examples, Lysenko and Kammerer: scientists who followed Jean-Baptiste Lamarck– who argued in the early 19th century that an organism can pass on to its offspring physical characteristics acquired through use or disuse during its lifetime, which is to say that learning (a kind of memory) is heritable– and went astray.

* James Gleick, The Information

###

As we muse on memory (and note that one cannot remember– and learn from– what one cannot know), we might recall that it was on this date in 1735 that New York Weekly Journal publisher and writer John Peter Zenger was acquitted of seditious libel against the royal governor of New York, William Cosby, on the basis that what he had published was true.

In 1733, Zenger had begun printing The New York Weekly Journal, voicing opinions critical of the colonial governor. On November 17, 1734, on Cosby’s orders, the sheriff arrested Zenger. After a grand jury refused to indict him, Attorney General Richard Bradley charged him with libel. Zenger’s lawyers, Andrew Hamilton and William Smith, Sr., successfully argued that truth is a defense against charges of libel… and Zenger became a symbol for freedom of the press.

An illustration depicting a courtroom scene with a judge, lawyers, and an audience, capturing the atmosphere of a historical trial.
Andrew Hamilton defending John Peter Zenger in court, 1734–1735 (source)

“Nature doesn’t feel compelled to stick to a mathematically precise algorithm; in fact, nature probably can’t stick to an algorithm.”*…

Just over 30 years ago, my GBN partner Stewart Brand and I were discussing the then-new web affordance PointCast, an active screensaver that displayed news and other information tailored to a user’s expressed interests and delivered live over the Internet. It was big news at the time; and while it failed, it prefigured the emergence of the algorithms that today feed “preferences” that we don’t even need (nor, for that matter, have the opportunity) to articulate.

The problem, we mused, is that a system like that becomes a trap, one that (by simply satisfying expressed desires) implicitly works against discovery of the altogether new, of the thing we didn’t yet know might interest (or benefit) us. A system like that pulls us more deeply into holes instead of helping us explore broader horizons– it is biased against discovery, against learning (in its broadest sense). Our most important discoveries are often the book somewhere on the library shelf near the one we were seeking, the article in the (old print) newspaper next to the one to which we were initially drawn.

The answer, we imagined, wasn’t to skip such systems altogether– they can play a useful role– but rather to introduce a complementary “dial-up randomness”: to create ways to feed ourselves a stream of surprises.

Benj Edwards reports on just such an affordance…

[Recently] a New York-based app developer named Isaac Gemal [here] debuted a new site called WikiTok, where users can vertically swipe through an endless stream of Wikipedia article stubs in a manner similar to the interface for video-sharing app TikTok.

It’s a neat way to stumble upon interesting information randomly, learn new things, and spend spare moments of boredom without reaching for an algorithmically addictive social media app. Although to be fair, WikiTok is addictive in its own way, but without an invasive algorithm tracking you and pushing you toward the lowest-common-denominator content. It’s also thrilling because you never know what’s going to pop up next.

WikiTok, which works through mobile and desktop browsers, feeds visitors a random list of Wikipedia articles—culled from the Wikipedia API—into a vertically scrolling interface. Despite the name that hearkens to TikTok, there are currently no videos involved. Each entry is accompanied by an image pulled from the corresponding article. If you see something you like, you can tap “Read More,” and the full Wikipedia page on the topic will open in your browser.

For now, the feed is truly random, and Gemal is currently resisting calls to automatically tailor the stream of articles to the user’s interests based on what they express interest in.

“I have had plenty of people message me and even make issues on my GitHub asking for some insane crazy WikiTok algorithm,” Gemal told Ars. “And I had to put my foot down and say something along the lines that we’re already ruled by ruthless, opaque algorithms in our everyday life; why can’t we just have one little corner in the world without them?”

The breadth of topics you’ll encounter on WikiTok is staggering, owing to the wide range of knowledge that Wikipedia covers…

… Gemal posted the code for WikiTok on GitHub, so anyone can modify or contribute to the project. Right now, the web app supports 14 languages, article previews, and article sharing on both desktop and mobile browsers. New features may arrive as contributors add them. It’s based on a tech stack that includes React 18, TypeScript, Tailwind CSS, and Vite.

And so far, he is sticking to his vision of a free way to enjoy Wikipedia without being tracked and targeted. “I have no grand plans for some sort of insane monetized hyper-calculating TikTok algorithm,” Gemal told us. “It is anti-algorithmic, if anything.”
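(For the technically curious: the article doesn’t detail Gemal’s implementation, but a feed like this can be sketched in a few dozen lines of TypeScript– the language of WikiTok’s own stack– against Wikipedia’s public REST API. The endpoint and field names below are assumptions drawn from that public API, not WikiTok’s actual code; treat it as an illustrative sketch only.)

```typescript
// Illustrative sketch only -- not WikiTok's actual code.
// Assumes Wikipedia's public REST "random summary" endpoint, which
// returns a title, a short extract, and (often) a thumbnail.

interface ArticleCard {
  title: string;
  extract: string;    // short plain-text stub shown in the feed
  imageUrl?: string;  // thumbnail, when the article has one
  pageUrl?: string;   // full article, opened by "Read More"
}

async function fetchRandomArticle(lang = "en"): Promise<ArticleCard> {
  const res = await fetch(
    `https://${lang}.wikipedia.org/api/rest_v1/page/random/summary`
  );
  if (!res.ok) throw new Error(`Wikipedia API error: ${res.status}`);
  const data = await res.json();
  return {
    title: data.title,
    extract: data.extract,
    imageUrl: data.thumbnail?.source,
    pageUrl: data.content_urls?.desktop?.page,
  };
}

// Pre-fetch a small batch so the next swipe feels instant.
async function fetchBatch(count = 5, lang = "en"): Promise<ArticleCard[]> {
  return Promise.all(
    Array.from({ length: count }, () => fetchRandomArticle(lang))
  );
}
```

A scrolling UI would simply append each batch to the feed as the user nears the end– no ranking, no profile, no tracking; the whole “algorithm” is a call to a random-article endpoint.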

WikiTok cures boredom in spare moments with wholesome swipe-ups: “Developer creates endless Wikipedia feed to fight algorithm addiction,” @benjedwards.com in @arstechnica.com.

* Margaret Wertheim

###

As we supersize serendipity, we might recall that it was on this date in 1967 that a remarkably warm and open new neighbor moved into the neighborhood: Misteroger’s Neighborhood premiered nationally on public television stations.

Fred McFeely Rogers was born in Latrobe, Pennsylvania on March 20, 1928. After earning his bachelor’s degree in music from Rollins College in 1951, he worked briefly for NBC in New York. In 1953, he began working at the new public television station WQED on the show The Children’s Corner, where he learned that sneakers were a lot quieter on the set than his dress shoes.

In 1961, Rogers moved to Toronto, Ontario to work on a new 15-minute show called Misterogers for CBC Television. In 1966, Rogers went back to WQED to create Misteroger’s Neighborhood.

In 1970, the show was renamed Mister Rogers’ Neighborhood. The series ended in 1976 but was picked up again three years later, when Rogers felt that his work speaking to children wasn’t done. The show continued from 1979 through 2001. Mr. Rogers passed away on February 27, 2003.

In 2011, PBS created an animated “spinoff” of the show called Daniel Tiger’s Neighborhood, featuring the characters Rogers had created in his “land of make-believe”; and in 2019, Tom Hanks portrayed Rogers in the film A Beautiful Day in the Neighborhood, a role that earned him an Oscar nomination.

source

Written by (Roughly) Daily

February 19, 2025 at 1:00 am

“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.”*…

Rings for sale in the Grand Bazaar, Istanbul, November 2024

The end of the year approaches, and thoughts turn to retrospectives. In what has become a (Roughly) Daily tradition, today’s edition features a year-end recap from the estimable Tom Whitwell, who shares a full deck of fascinating things he learned in 2024. For example…

6. The London Underground has a distinct form of mosquito, Culex pipiens f. molestus, genetically different from above-ground mosquitos, and present since at least the 1940s. [Katharine Byrne & Richard A Nichols]

7. Ozempic is a modified, synthetic version of a protein discovered in the venomous saliva of the Gila monster, a large, sluggish lizard native to the United States. [Scott Alexander]

22. In 2022, 55% of Macy’s income came from credit cards rather than retail sales. That’s fairly normal for US department stores. [Pan Kwan Yuk]

29. You can buy 200 real human molars for $900. [B for Bones, via Lauren]

32. In 1800, 1 in 3 people on earth were Chinese. Today, it’s less than 1 in 5. [Our World in Data, via Boyan Slat]

42. In the 2020s, over 16% of movies have colons in the title (like Spider-Man: Homecoming), up almost 300% since the 1990s. [Daniel Parris]

46. Between the 1920s and 1950s, millions of ‘enemies of the people’ — often educated elites — were sent to prison camps in the Soviet Union. Today, the areas around those camps are more prosperous and productive than similar areas. [Toews & Vézina]

Many more fascinating factoids at: “52 things I learned in 2024,” from @TomWhitwell.

Previous lists: 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023… and sprinkled throughout the December postings in (R)D over the years.

* Dr. Seuss

###

As we forage, we might recall that on this date in 1968 Marvin Gaye’s version of “I Heard It Through the Grapevine” was in the middle of its seven-week occupancy of the #1 spot on Billboard’s Hot 100.

A year earlier, Gladys Knight and the Pips had had a hit with the tune (#1 on the R&B chart; #2 on the Hot 100). Gaye’s version overtook its predecessor and became the biggest hit single on the Motown family of labels up to that point. The Gaye recording has since become an acclaimed soul classic. In 1998 the song was inducted into the Grammy Hall of Fame for “historical, artistic and significant” value.

Written by (Roughly) Daily

December 14, 2024 at 1:00 am