(Roughly) Daily


“May all beings have happy minds”*…

But then it’s important to be careful as to how we look for that happiness…

– Games where players either remove pieces from a pile or add pieces to it, with the loser being the one who causes the heap to shake (similar to the modern game pick-up sticks)

– Games of throwing dice

– Ball games

– Guessing a friend’s thoughts

Just a few of the entries in “List of games that Buddha would not play,” from T. W. Rhys Davids’ translation of the Brahmajāla Sutta (though the list is duplicated in a number of other early Buddhist texts, including the Vinaya Pitaka).

(TotH to Scott Alexander; image above: source)

* the Buddha

###

As we endeavor for enlightenment, we might recall that it was on this date in 2001 that Wikipedia was born. A free online encyclopedia that is collaboratively edited by volunteers, it has grown to be the world’s largest reference website, attracting 1.7 billion unique-device visitors monthly as of November 2021. As of January 9, 2022, it has more than fifty-eight million articles in more than 300 languages, including 6,436,030 articles in English (serving 42,848,899 active users of English Wikipedia), with 118,074 active contributors in the past month.

source

“Create more value than you capture”*…

A thoughtful consideration of Web3 from the always-insightful Tim O’Reilly…

There’s been a lot of talk about Web3 lately, and as the person who defined “Web 2.0” 17 years ago, I’m often asked to comment. I’ve generally avoided doing so because most prognostications about the future turn out to be wrong. What we can do, though, is to ask ourselves questions that help us see more deeply into the present, the soil in which the future is rooted. As William Gibson famously said, “The future is already here. It’s just not evenly distributed yet.” We can also look at economic and social patterns and cycles, using as a lens the observation ascribed to Mark Twain that “history doesn’t repeat itself, but it rhymes.”

Using those filters, what can we say about Web3?…

There follows a fascinating– and educational– analysis of the state of play and the issues that we face.

Tim concludes…

Let’s focus on the parts of the Web3 vision that aren’t about easy riches, on solving hard problems in trust, identity, and decentralized finance. And above all, let’s focus on the interface between crypto and the real world that people live in, where, as Matthew Yglesias put it when talking about housing inequality, “a society becomes wealthy over time by accumulating a stock of long-lasting capital goods.” If, as Sal Delle Palme argues, Web3 heralds the birth of a new economic system, let’s make it one that increases true wealth—not just paper wealth for those lucky enough to get in early but actual life-changing goods and services that make life better for everyone.

“Why it’s too early to get excited about Web3,” from @timoreilly.

See also: “My first impressions of web3” from Matthew Rosenfeld (AKA Moxie Marlinspike, @moxie, founder of @signalapp).

* Tim O’Reilly

###

As we focus on first principles, we might recall that it was on this date in 2007 that Steve Jobs introduced the iPhone at Macworld. The phone wasn’t available for sale until June 29th, occasioning one of the most heavily anticipated sales launches in the history of technology. Apple sold 1.4 million iPhones in 2007, and sales grew steadily each year; estimated sales in 2021 were 240-250 million.

source

“Nothing from nothing ever yet was born”*…

Lacy M. Johnson argues that there is no hierarchy in the web of life…

… Humans have been lumbering around the planet for only a half million years, the only species young and arrogant enough to name ourselves sapiens in genus Homo. We share a common ancestor with gorillas and whales and sea squirts, marine invertebrates that swim freely in their larval phase before attaching to rocks or shells and later eating their own brain. The kingdom Animalia, in which we reside, is an offshoot of the domain Eukarya, which includes every life-form on Earth with a nucleus—humans and sea squirts, fungi, plants, and slime molds that are ancient by comparison with us—and all these relations occupy the slenderest tendril of a vast and astonishing web that pulsates all around us and beyond our comprehension.

The most recent taxonomies—those based on genetic evidence that evolution is not a single lineage, but multiple lineages, not a branch that culminates in a species at its distant tip, but a network of convergences—have moved away from their histories as trees and chains and ladders. Instead, they now look more like sprawling, networked webs that trace the many points of relation back to ever more ancient origins, beyond our knowledge or capacity for knowing, in pursuit of the “universal ancestors,” life-forms that came before metabolism, before self-replication—the several-billion-year-old plasmodial blobs from which all life on Earth evolved. We haven’t found evidence for them yet, but we know what we’re looking for: they would be simple, small, and strange.

Slime molds can enter stasis at any stage in their life cycle—as an amoeba, as a plasmodium, as a spore—whenever their environment or the climate does not suit their preferences or needs. The only other species that have this ability are the so-called “living fossils” such as tardigrades and Notostraca (commonly known as water bears and tadpole shrimp, respectively). The ability to become dormant until conditions are more favorable for life might be one of the reasons slime mold has survived as long as it has, through dozens of geologic periods, countless ice ages, and the extinction events that have repeatedly wiped out nearly all life on Earth.

Slime mold might not have evolved much in the past two billion years, but it has learned a few things during that time. In laboratory environments, researchers have cut Physarum polycephalum into pieces and found that it can fuse back together within two minutes. Or, each piece can go off and live separate lives, learn new things, and return later to fuse together, and in the fusing, each individual can teach the other what it knows, and can learn from it in return.

Though, in truth, “individual” is not the right word to use here, because “individuality”—a concept so central to so many humans’ identities—doesn’t apply to the slime mold worldview. A single cell might look to us like a coherent whole, but that cell can divide itself into countless spores, creating countless possible cycles of amoeba to plasmodium to aethalia, which in turn will divide and repeat the cycle again. It can choose to “fruit” or not, to reproduce sexually or asexually or not at all, challenging every traditional concept of “species,” the most basic and fundamental unit of our flawed and imprecise understanding of the biological world. As a consequence, we have no way of knowing whether slime molds, as a broad class of beings, are stable or whether climate change threatens their survival, as it does our own. Without a way to count their population as a species, we can’t measure whether they are endangered or thriving. Should individuals that produce similar fruiting bodies be considered a species? What if two separate slime molds do not mate but share genetic material? The very idea of separateness seems antithetical to slime mold existence. It has so much to teach us…

More at: “What Slime Knows,” from @lacymjohnson in @Orion_Magazine.

See also, “Slime Molds Remember — but Do They Learn?” (from whence the image above) and “Now, in addition to penicillin, we can credit mold with elegant design.”

* Lucretius, On the Nature of Things

###

As we contemplate kinship, we might send insightful birthday greetings to Johann Hedwig; he was born on this date in 1730. A botanist noted for his study of mosses, he is considered “the father of bryology” (the study of mosses… cousins of mold).

source

Written by (Roughly) Daily

December 8, 2021 at 1:00 am

“In the lingo, this imaginary place is known as the Metaverse”*…

Ethan Zuckerman on the history of enthusiastically working to make a dystopian vision real…

In a booth at Ted’s Fish Fry, in Troy, New York, my friend Daniel Beck and I sketched out our plans for the metaverse. It was November 1994, just as the graphical web was becoming a thing, and we thought that the 3-D web could be just a few tweaks down the road. In our version of the metaverse, a server would track the identity of objects and their location in virtual space, but you’d render the objects locally, loaded to your hard drive off of a CD-ROM. It made a certain sense: Most users were on sub-56k modems, and AOL was shipping out enough CD-ROMs to pave Los Angeles each week.

To be very clear, Daniel and I were in no way being original. We were hoping to re-create the vision that Neal Stephenson had outlined in his 1992 book, Snow Crash. We were both (barely) self-conscious enough to understand that Snow Crash took place in a dystopia, and that Stephenson was positing a beautiful virtual world because the outside world had become so shitty that no one wanted to live in it. But we were young and naive and believed that our metaverse would rock. (Stephenson, of course, wasn’t being entirely original either. His vision of the metaverse owed a debt to Vernor Vinge’s 1981 True Names and to a series of William Gibson novels from the ’80s. Both of those authors owed a debt to Morton Heilig’s 1962 Sensorama machine, and on and on we go, back in time to Plato’s shadows on a cave wall.)

Daniel and I got a chance to actually build our metaverse about six months later, after we both joined Tripod as graphic designers and “webkeepers.” This was well before Tripod became a competitor to GeoCities, offering free webpages to all. (It was also before I accidentally invented pop-up ads. Sorry again about that.) Instead, we were a lifestyle magazine for recent graduates, providing smart, edgy, but practical content—“tools for life”—while hawking mutual funds to 20-somethings. When that business model didn’t take off (can’t imagine why), the half-dozen folks in the “tech cave” revived the metaverse idea…

We sold our CEO on the idea by telling him that the MOO could be a simulation of life in the big city postcollege, bringing onto the site new users who wanted to experience New York City while still in Ann Arbor or State College. And remember, this was 1995: The photos we used to represent this metaverse of ours were taken on chemical film! Which we then developed at a photo-processing lab! And then scanned on a flatbed scanner!

The MOO was really cool, in theory. Most people weren’t building HTML-enabled multiplayer spaces in 1995. It got us our first round of venture-capital funding, demonstrating to our investors that we weren’t just kids translating mutual-fund propaganda into HTML. We were technology innovators. We were building things no one had ever seen before.

But here’s the thing: The MOO was garbage. On a good day, I could give a demo that made it look smooth, slick, and fun to use. But our CEO couldn’t. And that was a problem. It wasn’t his fault. The MOO was buggy and quirky and demanded that you think of the world as a set of six-sided cubes made up of webpages. Our boss pulled the plug on the project, telling us, “I know it’s the future, but if I can’t use it, I can’t sell it to investors.”

I watched other metaverses rise and fall. An Icelandic firm, OZ Virtual, introduced a metaverse with 3-D avatars in sexy streetwear dancing on an infinite dance floor, which felt like the future for a few days. OZ Virtual used VRML, a format for specifying 3-D objects in an HTML-like language that was all the rage for a few months in 1996. Netscape supported it via a plug-in, and Blaxxun built a 3-D chat space. Don’t remember these moments of web history? Neither does the web, for the most part. Wikipedia’s thorough, but not comprehensive, timeline of virtual environments misses our MOO, the Icelandic dance club, and half a dozen other early virtual experiments. (By the way: “Blaxxun”? That’s another Stephenson reference, to Black Sun Systems, the fictional company that created Stephenson’s fictional metaverse. Very creative, guys.)

And then there was Second Life. When Linden Lab launched this metaverse in 2003, there was a brief burst of enthusiasm where otherwise serious entities, such as businesses and universities, bought and built out their own islands in Linden’s proprietary world. (Harvard’s Berkman Center for Internet and Society, now the Berkman Klein Center, had its own island.) The learning curve to build objects in Second Life was steep, the universe was populated haphazardly, and the Second Life client demanded a very fast computer and a very patient user…

So, after watching metaverses spring up and crumble for 27 years, and after building one myself, I feel fairly well equipped to offer context for what Mark Zuckerberg is trying to do with his firm’s pivot to “Meta.” In his heavily produced keynote video for Facebook Reality Labs, Zuckerberg starts by acknowledging that this is a bizarre time for the company to be launching a new product line—Facebook is under more scrutiny than ever for its ill effects on individuals and societies, and for the company’s utter unwillingness to address these issues.

But why bother with that mess? Or, as Zuckerberg put it: “Now, I know that some people will say that this isn’t a time to focus on the future. And I want to acknowledge that there are important issues to work on in the present. There always will be. So for many people, I’m just not sure there ever will be a good time to focus on the future.” Allow me to translate: Fuck you, haters.

Let’s be frank about this: Facebook’s metaverse sucks. From the first images in which legless torsos sit around a conference room, staring at a Zoom-like videoconferencing screen, to Zuckerberg’s tour of his virtual closet, filled with identical black outfits (see, he’s got a sense of humor!), Zuck’s metaverse looks pretty much like what we imagined one would look like in 1994. Look, I’m playing cards with my friends and we’re in zero gravity! And one of my friends is a robot! You could do this in Second Life 10 years ago, and in somewhat angular vectors in VRML 20 years ago…

The metaverse Zuckerberg shows off [is] promising future technologies that are five to 10 years off. But it still looks like junk. The fire in his fireplace is a roughly rendered glow. His superhero secret lair looks out over a paradise island that’s almost entirely static. There’s the nominal motion of waves, but none of the foliage moves. It’s tropical wallpaper pasted to virtual windows. The sun is setting behind Zuckerberg’s left shoulder, but he’s being lit from the right front. Even with a bajillion dollars to invest in a video to relaunch and rename his company, Zuckerberg’s team is showing just how difficult it is to create a visually believable virtual world.

But that’s not the problem with Zuckerberg’s metaverse. The problem is that it’s boring. The futures it imagines have been imagined a thousand times before, and usually better. Two old men chat over a chessboard, one in Barcelona, one in New York, much as they did on Minitel in the 1980s. There’s virtual Ping-Pong and surfing, you know, like on a Wii. You can watch David Attenborough nature documentaries, like you do on Netflix. You can videoconference with your workmates … you know, like you do every single day.

Zuckerberg isn’t building the metaverse because he has a remarkable new vision of how things could be. There’s not an original thought in his video, including the business model. Thirty-eight minutes in, Zuckerberg gets serious, talking about how humbling the past few years have been for him and his business. Remember, he’s not humbled by the problem of Russian disinformation, or the spread of anti-vax misinformation, or the challenge of how Instagram affects teen body image. No, he’s humbled by how hard it is to fight against Apple and Google.

Faced with the question of whether Facebook’s core products are eroding the foundations of a democratic society, Zuckerberg takes on a more pressing problem: Apple’s 30 percent cut on digital goods sold in its App Store. Never fear, though: With a Facebook ecosystem, Facebook developer tools, and Facebook marketplaces, the custom skin you buy in one video game will be wearable in another video game, just like Mark’s black T-shirt. Just as long as that video game is in Facebook’s metaverse. (Meta’s metaverse? Meta’s verse?) And if you want Mark’s actual digital shirt, it will almost certainly be available as an NFT, which the launch video promises will be supported. Did I mention how dystopian this all is?

Facebook can claim originality in at least one thing. Its combination of scale and irresponsibility has unleashed a set of diverse and fascinating sociopolitical challenges that it will take lawmakers, scholars, and activists at least a generation to fix. If Facebook has learned anything from 17 years of avoiding mediating those conflicts, it’s not apparent from the vision for the metaverse, where the power of human connection is celebrated as uncritically as it was before Macedonian fake-news brokers worked to sway the 2016 election…

Neal Stephenson’s metaverse has been a lasting creation because it’s fictional. It doesn’t have to solve all the intricate problems of content moderation and extremism and interpersonal interaction to raise questions about what virtual worlds can give us and what our real world lacks. Today’s metaverse creators are missing the point, just like I missed the point back at Ted’s Fish Fry in 1994. The metaverse isn’t about building perfect virtual escape hatches—it’s about holding a mirror to our own broken, shared world. Facebook’s promised metaverse is about distracting us from the world it’s helped break.

It was terrible then, and it’s terrible now: “Hey, Facebook, I Made a Metaverse 27 Years Ago,” from @EthanZ.

For a nuanced (and provocatively “optimistic”) look at what a metaverse like Facebook’s could yield if in fact it worked (and then morphed), see Corey J. White’s (@cjwhite) Repo Virtual.

And as (and for the reasons) noted in an earlier post, see “The Metaverse Is Bad,” from Ian Bogost (@ibogost).

* Neal Stephenson, Snow Crash

###

As we think twice, we might send adventurous birthday greetings to Giovanni Battista Belzoni; he was born on this date in 1778. The 14th child of a poor barber in Padua, he worked as a barber, a Capuchin monk, a magician, and a circus strongman before finding his true calling– explorer (and plunderer) of Egyptian antiquities.

Belzoni’s call to action came when he met a British Consul-General named Henry Salt, who persuaded him to gather Egyptian treasures to send back to the British Museum. Under extremely adverse conditions he transported the colossal granite head of Rameses II from Thebes to England, where it is now one of the treasures of the British Museum. Later, he discovered six major royal tombs in the Valley of the Kings, including that of Seti I, and brought to the British Museum a spectacular collection of Egyptian antiquities. He was the first person to penetrate the heart of the second pyramid at Giza and the first European to visit the oasis of Siwah and discover the ruined city of Berenice on the Red Sea. He stumbled into the tomb of King Ay but noted only a wall painting of 12 baboons, leading him to name the chamber “Tomb of the 12 Monkeys” (because hieroglyphs had not yet been deciphered, he usually had no idea who or what he had actually found).

Belzoni had two habits that have contributed to his legacy: he was a lover of graffiti signatures, and inscribed “Belzoni” on many of Egypt’s antique treasures, where the carvings survive to this day. And he carried a whip, which, given that he was one of the models for Indiana Jones, became one of that character’s hallmarks.

source

“Foresight begins when we accept that we are now creating a civilization of risk”*…

There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy…

As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?”

Agre’s papers are here.

* Jacques Ellul

###

As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean, from London to U.S. President James Buchanan in Washington, D.C.– initiating a new era in global communications.

Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.
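That rate checks out, as a few lines of Python confirm (the calendar dates below are illustrative; only the elapsed time and the letter count given above matter):

```python
from datetime import datetime

# Transmission window as reported: 10:50 a.m. to 4:30 a.m. the next day
start = datetime(1858, 8, 16, 10, 50)
end = datetime(1858, 8, 17, 4, 30)

letters = 509  # letter count as given in the account above

elapsed_minutes = (end - start).total_seconds() / 60
print(end - start)                            # 17:40:00 -- just under eighteen hours
print(round(elapsed_minutes / letters, 2))    # 2.08 -- about two minutes per letter
```

A little over two minutes per letter– agonizing by modern standards, but still days faster than the speediest steamer.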

After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.

source