“May all beings have happy minds”*…
But then it’s important to be careful as to how we look for that happiness…
– Games where players either remove pieces from a pile or add pieces to it, with the loser being the one who causes the heap to shake (similar to the modern game pick-up sticks)
– Games of throwing dice
– Ball games
– Guessing a friend’s thoughts
Just a few of the entries in “List of games that Buddha would not play,” from T. W. Rhys Davids‘ translation of the Brahmajāla Sutta (though the list is duplicated in a number of other early Buddhist texts, including the Vinaya Pitaka).
(TotH to Scott Alexander; image above: source)
* the Buddha
###
As we endeavor for enlightenment, we might recall that it was on this date in 2001 that Wikipedia was born. A free online encyclopedia that is collaboratively edited by volunteers, it has grown to be the world’s largest reference website, attracting 1.7 billion unique-device visitors monthly as of November 2021. As of January 9, 2022, it has more than fifty-eight million articles in more than 300 languages, including 6,436,030 articles in English (serving 42,848,899 active users of English Wikipedia), with 118,074 active contributors in the past month.
“Create more value than you capture”*…
A thoughtful consideration of Web 3.0 from the always-insightful Tim O’Reilly…
There’s been a lot of talk about Web3 lately, and as the person who defined “Web 2.0” 17 years ago, I’m often asked to comment. I’ve generally avoided doing so because most prognostications about the future turn out to be wrong. What we can do, though, is to ask ourselves questions that help us see more deeply into the present, the soil in which the future is rooted. As William Gibson famously said, “The future is already here. It’s just not evenly distributed yet.” We can also look at economic and social patterns and cycles, using as a lens the observation ascribed to Mark Twain that “history doesn’t repeat itself, but it rhymes.”
Using those filters, what can we say about Web3?…
There follows a fascinating– and educational– analysis of the state of play and the issues that we face.
Tim concludes…
Let’s focus on the parts of the Web3 vision that aren’t about easy riches, on solving hard problems in trust, identity, and decentralized finance. And above all, let’s focus on the interface between crypto and the real world that people live in, where, as Matthew Yglesias put it when talking about housing inequality, “a society becomes wealthy over time by accumulating a stock of long-lasting capital goods.” If, as Sal Delle Palme argues, Web3 heralds the birth of a new economic system, let’s make it one that increases true wealth—not just paper wealth for those lucky enough to get in early but actual life-changing goods and services that make life better for everyone.
“Why it’s too early to get excited about Web3,” from @timoreilly.
See also: “My first impressions of web3” from Matthew Rosenfeld (AKA Moxie Marlinspike, @moxie, founder of @signalapp).
* Tim O’Reilly
###
As we focus on first principles, we might recall that it was on this date in 2007 that Steve Jobs introduced the iPhone at Macworld. The phone wasn’t available for sale until June 29th, occasioning one of the most heavily anticipated sales launches in the history of technology. Apple sold 1.4 million iPhones in 2007, with sales increasing steadily each year; estimated sales in 2021 are 240-250 million.
“Nothing from nothing ever yet was born”*…
Lacy M. Johnson argues that there is no hierarchy in the web of life…
… Humans have been lumbering around the planet for only a half million years, the only species young and arrogant enough to name ourselves sapiens in genus Homo. We share a common ancestor with gorillas and whales and sea squirts, marine invertebrates that swim freely in their larval phase before attaching to rocks or shells and later eating their own brain. The kingdom Animalia, in which we reside, is an offshoot of the domain Eukarya, which includes every life-form on Earth with a nucleus—humans and sea squirts, fungi, plants, and slime molds that are ancient by comparison with us—and all these relations occupy the slenderest tendril of a vast and astonishing web that pulsates all around us and beyond our comprehension.
The most recent taxonomies—those based on genetic evidence that evolution is not a single lineage, but multiple lineages, not a branch that culminates in a species at its distant tip, but a network of convergences—have moved away from their histories as trees and chains and ladders. Instead, they now look more like sprawling, networked webs that trace the many points of relation back to ever more ancient origins, beyond our knowledge or capacity for knowing, in pursuit of the “universal ancestors,” life-forms that came before metabolism, before self-replication—the several-billion-year-old plasmodial blobs from which all life on Earth evolved. We haven’t found evidence for them yet, but we know what we’re looking for: they would be simple, small, and strange.
…
Slime molds can enter stasis at any stage in their life cycle—as an amoeba, as a plasmodium, as a spore— whenever their environment or the climate does not suit their preferences or needs. The only other species who have this ability are the so-called “living fossils” such as tardigrades and Notostraca (commonly known as water bears and tadpole shrimp, respectively). The ability to become dormant until conditions are more favorable for life might be one of the reasons slime mold has survived as long as it has, through dozens of geologic periods, countless ice ages, and the extinction events that have repeatedly wiped out nearly all life on Earth.
Slime mold might not have evolved much in the past two billion years, but it has learned a few things during that time. In laboratory environments, researchers have cut Physarum polycephalum into pieces and found that it can fuse back together within two minutes. Or, each piece can go off and live separate lives, learn new things, and return later to fuse together, and in the fusing, each individual can teach the other what it knows, and can learn from it in return.
Though, in truth, “individual” is not the right word to use here, because “individuality”—a concept so central to so many humans’ identities—doesn’t apply to the slime mold worldview. A single cell might look to us like a coherent whole, but that cell can divide itself into countless spores, creating countless possible cycles of amoeba to plasmodium to aethalia, which in turn will divide and repeat the cycle again. It can choose to “fruit” or not, to reproduce sexually or asexually or not at all, challenging every traditional concept of “species,” the most basic and fundamental unit of our flawed and imprecise understanding of the biological world. As a consequence, we have no way of knowing whether slime molds, as a broad class of beings, are stable or whether climate change threatens their survival, as it does our own. Without a way to count their population as a species, we can’t measure whether they are endangered or thriving. Should individuals that produce similar fruiting bodies be considered a species? What if two separate slime molds do not mate but share genetic material? The very idea of separateness seems antithetical to slime mold existence. It has so much to teach us…
More at: “What Slime Knows,” from @lacymjohnson in @Orion_Magazine.
See also, “Slime Molds Remember — but Do They Learn?” (from whence the image above) and “Now, in addition to penicillin, we can credit mold with elegant design.”
* Lucretius, On the Nature of Things
###
As we contemplate kinship, we might send insightful birthday greetings to Johann Hedwig; he was born on this date in 1730. A botanist noted for his study of mosses, he is considered “the father of bryology” (the study of mosses… cousins of mold).
“Foresight begins when we accept that we are now creating a civilization of risk”*…
There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…
In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.
That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.
“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.
Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.
Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.
“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director of the Center for AI and Digital Policy…
As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?“
Agre’s papers are here.
* Jacques Ellul
###
As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean from London to U.S. President James Buchanan in Washington, D.C.– initiating a new era in global communications.
Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland, Canada. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.
After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.
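The arithmetic above is internally consistent; a quick sketch confirms it (the clock times and letter count come from the account, while the calendar date of August 16, 1858– the generally cited date of the message– is an assumption used only to span midnight):

```python
from datetime import datetime

# Clock times from the account above; the calendar date (August 16, 1858)
# is assumed here so the interval can cross midnight.
start = datetime(1858, 8, 16, 10, 50)   # transmission began at 10:50am
end = datetime(1858, 8, 17, 4, 30)      # completed at 4:30am the next day

elapsed = end - start                    # total transmission time
letters = 509                            # letters in the 99-word message
rate = round(elapsed.total_seconds() / 60 / letters, 2)

print(elapsed)  # 17:40:00 -- "nearly eighteen hours"
print(rate)     # 2.08 -- "about two minutes per letter"
```

At roughly 2.08 minutes per letter, both of the account’s round figures check out.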