(Roughly) Daily

Posts Tagged ‘learning’

“Damn everything but the circus!”*…

Acro-balancing in Circus and Philosophy at the University of Kentucky, fall 2017

Meg Wallace, of the University of Kentucky, teaches the philosophy course that I wish I’d taken…

The circus is ridiculous. Or: most people think it’s ridiculous. We even express our disdain for disorganized, poorly run groups by claiming, disparagingly, that such entities are “run like a circus.” (This isn’t true, of course. The amount of organization, discipline, and hard work that it takes to run a circus is mind-blowingly impressive.) But this is one reason why I teach Circus and Philosophy. I want to show students a new way into philosophy – through doing ridiculous things.

It seems strange that philosophers often teach philosophy of art, philosophy of sport, philosophy of the performing arts, and so on, without having the students at least minimally participate in the activities or artforms that are being philosophized about. This lack of first-person engagement is especially unfortunate when the topic at hand crucially involves the perspective of the participant– the painter, the dancer, the actor, the aerialist, the clown. Circus and Philosophy is an attempt to explore this participation/theorizing gap. (Another aim is just to magic-trick undergrads into loving philosophy.)

[The circus is] rich with potential for deep discussions about an array of philosophical topics in aesthetics, ethics, social and political philosophy, personal identity, mind, metaphysics, epistemology, and so on. It is also intrinsically interdisciplinary, so students with interests and majors outside of philosophy can easily find a way in…

Finding the profound in the profane: “Circus and Philosophy: Teaching Aristotle through Juggling.”

* e e cummings

###

As we benefit from the big top, we might recall that it was on this date in 1987 that another instructive family of entertainers, The Simpsons, made their debut on television in “Good Night,” the first of 48 shorts that aired on The Tracey Ullman Show before the characters were given their own eponymously-titled show– now the longest-running scripted series in U.S. television history.

Written by (Roughly) Daily

April 19, 2022 at 1:00 am

“Artificial intelligence is growing up fast”*…

A simple prototype system sidesteps the computing bottleneck in tuning– teaching– artificial intelligence algorithms…

A simple electrical circuit [pictured above] has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”…
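The story above is light on how a circuit can teach itself, so a sketch of the underlying idea may help. The scheme this line of work builds on is usually called coupled learning: the network is run twice, once in a “free” state where physics alone settles the node voltages, and once in a “clamped” state where the outputs are nudged slightly toward the desired answer; each resistor then adjusts its own conductance by comparing the voltage drops it sees in the two states. Below is a toy Python simulation of that rule on a single-output-node network sorting two flower classes by petal measurements. The layout, voltage encodings, and constants are illustrative assumptions on my part, not the published circuit design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic flower classes: petal length/width in cm, scaled to volts.
small = rng.normal([1.4, 0.2], 0.1, size=(50, 2))   # small-petaled class
large = rng.normal([4.3, 1.3], 0.3, size=(50, 2))   # large-petaled class
X = np.vstack([small, large]) / 5.0                 # encode cm as volts
y = np.array([0.2] * 50 + [0.8] * 50)               # target output voltage

# One output node joined by adjustable conductances to four neighbours:
# the two input sources, a 1 V bias source, and ground (0 V).
g = np.full(4, 0.5)            # conductances; must stay positive
eta, alpha = 0.1, 0.5          # clamping nudge and learning rate

def settle(v_in, g):
    """What physics does for free: the output node sits at the
    conductance-weighted average of its neighbours (Kirchhoff's laws)."""
    sources = np.append(v_in, [1.0, 0.0])   # inputs, bias, ground
    return float(g @ sources / g.sum()), sources

for epoch in range(200):
    for v_in, target in zip(X, y):
        v_free, sources = settle(v_in, g)
        # Clamped state: nudge the output slightly toward the target.
        v_clamp = v_free + eta * (target - v_free)
        # Local rule: each edge compares its squared voltage drop in the
        # free and clamped states and adjusts its own conductance.
        g += (alpha / eta) * ((sources - v_free) ** 2
                              - (sources - v_clamp) ** 2)
        g = np.clip(g, 1e-3, None)           # keep conductances physical

# After training, classify by thresholding the settled output at 0.5 V.
outputs = np.array([settle(v, g)[0] for v in X])
print("accuracy:", ((outputs > 0.5) == (y > 0.5)).mean())
```

Note that every quantity in the update rule is local to a single edge of the network, which is the property that lets the physical version dispense with a central computer.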

More at “Simple electrical circuit learns on its own—with no help from a computer,” from @ScienceMagazine.

* Diane Ackerman

###

As we brace ourselves (and lest we doubt that big things can grow from humble beginnings like these), we might recall that it was on this date in 1959 that Texas Instruments (TI) demonstrated the first working integrated circuit (IC), which had been invented by Jack Kilby. Kilby created the device to prove that resistors and capacitors could exist on the same piece of semiconductor material. His circuit consisted of a sliver of germanium with five components linked by wires. It was Fairchild’s Robert Noyce, however, who filed for a patent within months of Kilby and who made the IC a commercially-viable technology. Both men are credited as co-inventors of the IC. (Kilby won the Nobel Prize for his work in 2000; Noyce, who died in 1990, did not share.)

Kilby and his first IC (source)

“Learning never exhausts the mind”*…

As regular readers know, each year Tom Whitwell shares a list of the more intriguing things he’s learned over the year; happily, 2021 is no exception…

10% of US electricity is generated from old Russian nuclear warheads. [Geoff Brumfiel]

The entire global cosmetic Botox industry is supported by an annual production of just a few milligrams of botulinum toxin. Pure toxin would cost ~$100 trillion per kilogram. [Anthony Warner]

Wearing noise-cancelling headphones in an open-plan office helps a little bit — reducing cognitive errors by 14% — but actual silence reduces those errors by one third. [Benjamin Müller & co]

Until 1873, Japanese hours varied by season. There were six hours between sunrise and sunset, so a daylight hour in summer was 1/3rd longer than an hour in winter. [Sara J. Schechner]

48 other fascinating finds at: “52 things I learned in 2021,” from @TomWhitwell.

* Leonardo da Vinci

###

As we live and learn, we might recall that it was on this date in 1545, in response to the Protestant Reformation, that the Council of Trent (Concilium Tridentinum) was convened by the Roman Catholic Church. Its work concluded in 1563, and its results were published in 1564, condemning what the Catholic Church deemed to be the heresies of Protestants. The embodiment of the Counter-Reformation, the Council of Trent established a firm and permanent distinction between the two practices of faith.

Council of Trent (painting in the Museo del Palazzo del Buonconsiglio, Trento)

source

“Nothing from nothing ever yet was born”*…

Lacy M. Johnson argues that there is no hierarchy in the web of life…

… Humans have been lumbering around the planet for only a half million years, the only species young and arrogant enough to name ourselves sapiens in genus Homo. We share a common ancestor with gorillas and whales and sea squirts, marine invertebrates that swim freely in their larval phase before attaching to rocks or shells and later eating their own brain. The kingdom Animalia, in which we reside, is an offshoot of the domain Eukarya, which includes every life-form on Earth with a nucleus—humans and sea squirts, fungi, plants, and slime molds that are ancient by comparison with us—and all these relations occupy the slenderest tendril of a vast and astonishing web that pulsates all around us and beyond our comprehension.

The most recent taxonomies—those based on genetic evidence that evolution is not a single lineage, but multiple lineages, not a branch that culminates in a species at its distant tip, but a network of convergences—have moved away from their histories as trees and chains and ladders. Instead, they now look more like sprawling, networked webs that trace the many points of relation back to ever more ancient origins, beyond our knowledge or capacity for knowing, in pursuit of the “universal ancestors,” life-forms that came before metabolism, before self-replication—the several-billion-year-old plasmodial blobs from which all life on Earth evolved. We haven’t found evidence for them yet, but we know what we’re looking for: they would be simple, small, and strange.

Slime molds can enter stasis at any stage in their life cycle—as an amoeba, as a plasmodium, as a spore— whenever their environment or the climate does not suit their preferences or needs. The only other species who have this ability are the so-called “living fossils” such as tardigrades and Notostraca (commonly known as water bears and tadpole shrimp, respectively). The ability to become dormant until conditions are more favorable for life might be one of the reasons slime mold has survived as long as it has, through dozens of geologic periods, countless ice ages, and the extinction events that have repeatedly wiped out nearly all life on Earth.

Slime mold might not have evolved much in the past two billion years, but it has learned a few things during that time. In laboratory environments, researchers have cut Physarum polycephalum into pieces and found that it can fuse back together within two minutes. Or, each piece can go off and live separate lives, learn new things, and return later to fuse together, and in the fusing, each individual can teach the other what it knows, and can learn from it in return.

Though, in truth, “individual” is not the right word to use here, because “individuality”—a concept so central to so many humans’ identities—doesn’t apply to the slime mold worldview. A single cell might look to us like a coherent whole, but that cell can divide itself into countless spores, creating countless possible cycles of amoeba to plasmodium to aethalia, which in turn will divide and repeat the cycle again. It can choose to “fruit” or not, to reproduce sexually or asexually or not at all, challenging every traditional concept of “species,” the most basic and fundamental unit of our flawed and imprecise understanding of the biological world. As a consequence, we have no way of knowing whether slime molds, as a broad class of beings, are stable or whether climate change threatens their survival, as it does our own. Without a way to count their population as a species, we can’t measure whether they are endangered or thriving. Should individuals that produce similar fruiting bodies be considered a species? What if two separate slime molds do not mate but share genetic material? The very idea of separateness seems antithetical to slime mold existence. It has so much to teach us…

More at: “What Slime Knows,” from @lacymjohnson in @Orion_Magazine.

See also, “Slime Molds Remember — but Do They Learn?” (from whence the image above) and “Now, in addition to penicillin, we can credit mold with elegant design.”

* Lucretius, On the Nature of Things

###

As we contemplate kinship, we might send insightful birthday greetings to Johann Hedwig; he was born on this date in 1730. A botanist noted for his study of mosses, he is considered “the father of bryology” (the study of mosses… cousins of mold).

source

Written by (Roughly) Daily

December 8, 2021 at 1:00 am

“Curiosity has its own reason for existence”*…

This is, as nearly as I can tell, the 5,000th (Roughly) Daily post (4,505 blog posts, preceded by 495 email-only pieces). On this numerologically-significant occasion, my deep thanks to readers past and present. It seems appropriate to devote this post to the impulse that has powered (Roughly) Daily from the start, curiosity– free-range curiosity…

Recently I read a terrific blog post by CJ Eller where he talks about the value of paying attention to offbeat things.

Eller was joining an online conversation about how people get caught up in the “status and celebrity game” when they’re trying to grow their audience. They become overly obsessed with following — and emulating, and envying — the content of people with massive audiences. The conversation started with this poignant essay by the author Ali Montag; she concludes that rabidly chasing followers endows your writing (and thinking!) with “inescapable mediocrity.” (It also tends to make you miserable, she points out.)…

Instead of crowding your attention with what’s already going viral on the intertubes, focus on the weird stuff. Hunt down the idiosyncratic posts and videos that people are publishing, oftentimes to tiny and niche audiences. It’s decidedly unviral culture — but it’s more likely to plant in your mind the seed of a rare, new idea.

I love the idea of “rewilding your attention”. It puts a name on something I’ve been trying to do for a while now: To stop clicking on the stuff big-tech algorithms push at me… social behavior can influence our attention: What are the high-follower-count folks talking/posting/arguing about today? This isn’t always a bad thing. We’re social animals, so we’re necessarily (and often productively) intrigued by what others are chewing over. But as these three writers note, it’s also crucial to follow your own signal — to cultivate the stuff you’re obsessed with, even if few others are.

On top of the social pressure from people online, there’s technological pressure too — from recommendation systems trying to juke our attention… Medium’s algorithm has deduced that … I’m a nerd. They are correct! I am. The other major social networks, like Twitter or YouTube, offer me the same geek-heavy recommendations when I log in. And hey, they’re not wrong either; I really do like these subjects.

But … I’m also interested in so many other things that are far outside these narrow lanes. I am, for example, a Canadian who’s deeply into Canadian art, and a musician who spends a lot of time thinking about composition and gear and lyric-writing and production and guitar pedals, and a father who thinks a lot about the culture my kids show me, and I have a super-snobby fanboy love of the 18th century poet Alexander Pope.

You’re the same way; you contain your own Whitmanian multitudes, your pockets of woolly-eyed obsession. We all do.

But our truly quirky dimensions are never really grasped by these recommendation algorithms. They have all the dullness of a Demographics 101 curriculum; they sketch our personalities with the crudity of crime-scene chalk-outlines. They’re not wrong about us; but they’re woefully incomplete. This is why I always get a slightly flattened feeling when I behold my feed, robotically unloading boxes of content from the same monotonous conveyor-belt of recommendations, catered to some imaginary marketing version of my identity. It’s like checking my reflection in the mirror and seeing stock-photo imagery.

The other problem with big-tech recommendation systems is they’re designed by people who are convinced that “popularity” and “recency” equal “valuable”. They figure that if they sample the last 15 milliseconds of the global zeitgeist and identify what’s floated to the top of that quantum foam, I’ll care about it. Hey, a thing happened and people are talking about it, here’s the #hashtag!

And again … they’re sometimes right! I am often intrigued to know the big debates of the day, like Oscar Wilde peering into his daily gazette. But I’d also like to stumble over arguments yet more arcane, and material that will never be the subject of a massive online conversation because only a small group of oddballs care about it.

You’re the same way too, I bet. We’re all weird in different ways, but we’re all weird.

Big-tech recommendation systems have been critiqued lately for their manifold sins— i.e. how their remorseless lust for “engagement” leads them to overpromote hotly emotional posts; how they rile people up; how they feed us clicktastic disinfo; how they facilitate “doomscrolling”. All true.

But they pose a subtler challenge, too, for our imaginative lives: their remarkably dull conception of what’s interesting. It’s like intellectual monocropping. You open your algorithmic feed and see rows and rows of neatly planted corn, and nothing else.

That’s why I so enjoy the concept of “rewilding”… For me, it’s meant slowly — over the last few years — building up a big, rangy collection of RSS feeds that let me check up on hundreds of eclectic blogs and publications and people. (I use Feedly.) I’ve also started using Fraidycat, a niftily quixotic feed-reader that lets you sort sources into buckets by “how often should I check this source”, which is a cool heuristic; some people/sites you want to check every day, and others, twice a year.
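For readers who want to try that bucketing trick themselves, here is a minimal sketch in Python, assuming the third-party feedparser package; the feed URLs and intervals are placeholders rather than anyone’s actual reading list.

```python
import time

import feedparser  # third-party: pip install feedparser

# Each bucket maps a check interval (in seconds) to the feeds that belong in it.
BUCKETS = {
    60 * 60 * 24:       ["https://example.com/daily-blog/rss"],    # daily
    60 * 60 * 24 * 7:   ["https://example.com/weekly-zine/rss"],   # weekly
    60 * 60 * 24 * 180: ["https://example.com/rare-gem/rss"],      # ~twice a year
}

last_checked: dict[str, float] = {}   # persist this to disk in real use

def due_feeds(now: float):
    """Yield every feed whose bucket interval has elapsed since its last check."""
    for interval, urls in BUCKETS.items():
        for url in urls:
            if now - last_checked.get(url, 0.0) >= interval:
                yield url

def check(url: str) -> None:
    """Fetch one feed and print its newest few headlines."""
    feed = feedparser.parse(url)
    title = feed.feed.get("title", url)
    for entry in feed.entries[:3]:
        print(f"{title}: {entry.title}")
    last_checked[url] = time.time()

if __name__ == "__main__":
    for url in due_feeds(time.time()):
        check(url)
```

Run on a schedule (say, hourly from cron), it polls only the feeds whose bucket interval has elapsed since the last check, which is exactly the “how often should I check this source” heuristic described above.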

Other times I spend an hour or two simply prospecting — I pick a subject almost at random, then check to see if there’s a hobbyist or interest-group discussion-board devoted to it. (There usually is, running on free software like phpBB.) Then I’ll just trawl through the forum to find out what the community cares about. It’s like a psychogeographic walk of the mind.

Another awesome technology for rewilding my attention, I’ve found, is the good old-fashioned paper book. I go to a bookstore, pick up something where it’s not immediately obvious why it’d appeal to me, then flip around to see if anything catches my eye. (This works online, too, via the wonderful universe of pre-1923, freely-accessible ebooks and publications at the Internet Archive, Project Gutenberg, or even Google Books. Pre-WWI material is often super odd and thought-provoking.)…

Step away from algorithmic feeds. In praise of free-range curiosity: “Rewilding your attention,” from Clive Thompson (@pomeranian99).

See also “Before Truth: Curiosity, Negative Capability, Humility,” from Will Wilkinson (@willwilkinson).

* “Curiosity has its own reason for existence. One cannot help but be in awe when he contemplates the mysteries of eternity, of life, of the marvelous structure of reality. It is enough if one tries merely to comprehend a little of this mystery each day.” — Albert Einstein, “Old Man’s Advice to Youth: ‘Never Lose a Holy Curiosity.’” LIFE Magazine (2 May 1955) p. 64

###

As we revel in rabbit holes, we might send insightfully-humorous birthday greetings to William Penn Adair Rogers; he was born on this date in 1879.  A stage and motion picture actor, vaudeville performer, cowboy, humorist, newspaper columnist, and social commentator, he traveled around the world three times, made 71 films (50 silent films and 21 “talkies”), and wrote more than 4,000 nationally syndicated newspaper columns.  By the mid-1930s Rogers was hugely popular in the United States, its leading political wit and the highest paid Hollywood film star.  He died in 1935 with aviator Wiley Post when their small airplane crashed in northern Alaska.

Known as “Oklahoma’s Favorite Son,” Rogers was a Cherokee citizen, born to a Cherokee family in Indian Territory (now part of Oklahoma).

“I am not a member of an organized political party. I am a Democrat.” – Will Rogers


source