(Roughly) Daily


“Doing research on the Web is like using a library assembled piecemeal by pack rats and vandalized nightly”*…

But surely, argues Jonathan Zittrain, it shouldn’t be that way…

Sixty years ago the futurist Arthur C. Clarke observed that any sufficiently advanced technology is indistinguishable from magic. The internet—how we both communicate with one another and together preserve the intellectual products of human civilization—fits Clarke’s observation well. In Steve Jobs’s words, “it just works,” as readily as clicking, tapping, or speaking. And every bit as much aligned with the vicissitudes of magic, when the internet doesn’t work, the reasons are typically so arcane that explanations for it are about as useful as trying to pick apart a failed spell.

Underpinning our vast and simple-seeming digital networks are technologies that, if they hadn’t already been invented, probably wouldn’t unfold the same way again: they are artifacts of a very particular circumstance, unlikely to be designed the same way in any alternate timeline.

The internet’s distinct architecture arose from a distinct constraint and a distinct freedom: First, its academically minded designers didn’t have or expect to raise massive amounts of capital to build the network; and second, they didn’t want or expect to make money from their invention.

The internet’s framers thus had no money to simply roll out a uniform centralized network the way that, for example, FedEx metabolized a capital outlay of tens of millions of dollars to deploy liveried planes, trucks, people, and drop-off boxes, creating a single point-to-point delivery system. Instead, they settled on the equivalent of rules for how to bolt existing networks together.

Rather than a single centralized network modeled after the legacy telephone system, operated by a government or a few massive utilities, the internet was designed to allow any device anywhere to interoperate with any other device, allowing any provider able to bring whatever networking capacity it had to the growing party. And because the network’s creators did not mean to monetize, much less monopolize, any of it, the key was for desirable content to be provided naturally by the network’s users, some of whom would act as content producers or hosts, setting up watering holes for others to frequent.

Unlike in the briefly ascendant proprietary networks such as CompuServe, AOL, and Prodigy, content and network would be separated. Indeed, the internet had and has no main menu, no CEO, no public stock offering, no formal organization at all. There are only engineers who meet every so often to refine its suggested communications protocols, which hardware and software makers, and network builders, are then free to take up as they please.

So the internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks. Tim Berners-Lee took up the invite and invented the protocols for the World Wide Web, an application to run on the internet. If your computer spoke “web” by running a browser, then it could speak with servers that also spoke web, naturally enough known as websites. Pages on sites could contain links to all sorts of things that would, by definition, be but a click away, and might in practice be found at servers anywhere else in the world, hosted by people or organizations not only not affiliated with the linking webpage, but entirely unaware of its existence. And webpages themselves might be assembled from multiple sources before they displayed as a single unit, facilitating the rise of ad networks that could be called on by websites to insert surveillance beacons and ads on the fly, as pages were pulled together at the moment someone sought to view them.

And like the internet’s own designers, Berners-Lee gave away his protocols to the world for free—enabling a design that omitted any form of centralized management or control, since there was no usage to track by a World Wide Web, Inc., for the purposes of billing. The web, like the internet, is a collective hallucination, a set of independent efforts united by common technological protocols to appear as a seamless, magical whole.
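
(To make “speaking web” a bit more concrete, here is a minimal Python sketch that fetches a page and lists the other hosts that page pulls resources from, a small demonstration of how a single webpage is assembled from many unrelated servers. The target URL is a placeholder, and the handful of tags inspected is deliberately partial; real browsers consult many more kinds of embedded references.)

# A minimal sketch of a client that "speaks web": it fetches a page over
# HTTP and lists the other hosts that page pulls resources from. The
# target URL is a placeholder, and the handful of tags inspected is
# deliberately partial; real browsers assemble pages from many more
# kinds of embedded references.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE = "https://example.com/"  # hypothetical target; substitute any page


class ResourceLister(HTMLParser):
    """Collect the URLs of resources a page embeds (scripts, images, styles)."""

    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img", "iframe") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.resources.append(attrs["href"])


html = urlopen(PAGE).read().decode("utf-8", errors="replace")
lister = ResourceLister()
lister.feed(html)

page_host = urlparse(PAGE).netloc
for res in lister.resources:
    host = urlparse(urljoin(PAGE, res)).netloc
    origin = "same-origin" if host == page_host else "third-party"
    print(f"{origin:12} {res}")

Pointed at a typical ad-supported page, a sketch like this will surface dozens of third-party hosts, many of them exactly the ad networks and surveillance beacons described above.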

This absence of central control, or even easy central monitoring, has long been celebrated as an instrument of grassroots democracy and freedom. It’s not trivial to censor a network as organic and decentralized as the internet. But more recently, these features have been understood to facilitate vectors for individual harassment and societal destabilization, with no easy gating points through which to remove or label malicious work not under the umbrellas of the major social-media platforms, or to quickly identify their sources. While both assessments have power to them, they each gloss over a key feature of the distributed web and internet: Their designs naturally create gaps of responsibility for maintaining valuable content that others rely on. Links work seamlessly until they don’t. And as tangible counterparts to online work fade, these gaps represent actual holes in humanity’s knowledge…
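
(Link rot is easy enough to witness firsthand. Below, a minimal Python sketch that probes a saved list of links and reports which ones still answer; the file name links.txt, the timeout, and the User-Agent string are illustrative assumptions rather than anyone’s production tooling.)

# A minimal link-rot probe. It reads URLs (one per line) from a plain-text
# file and asks each server whether the resource still answers; a 4xx/5xx
# response, or no response at all, marks a candidate rotten link. The file
# name "links.txt", the timeout, and the User-Agent string are illustrative
# assumptions.
import urllib.error
import urllib.request


def probe(url, timeout=10):
    """Return an HTTP status code, or an error name if the host is unreachable."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-rot-probe"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status  # 2xx/3xx: the link still works
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 404: the page has rotted away
    except (urllib.error.URLError, TimeoutError) as err:
        return type(err).__name__  # DNS failure, refused connection, timeout


if __name__ == "__main__":
    with open("links.txt") as f:  # hypothetical input file
        for line in f:
            url = line.strip()
            if url:
                print(probe(url), url)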

The glue that holds humanity’s knowledge together is coming undone: “The Internet Is Rotting.” @zittrain explains what we can do to heal it.

(Your correspondent seconds his call to support the critically important work of The Internet Archive and the Harvard Library Innovation Lab, along with the other initiatives he outlines.)

* Roger Ebert

###

As we protect our past for the future, we might recall that it was on this date in 1937 that Hormel introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.


“All human beings have three lives: public, private, and secret”*…

 

[image: A monitor displays the Omron Corp. Okao face- and emotion-detection technology during CES 2020]

 

Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.”

“Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy.

That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending.

For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of an investigation by Kashmir Hill in The New York Times. According to the article, the company scraped billions of photos from social-networking and other sites on the web—without permission from the sites in question, or the users who submitted them—and built a comprehensive database of labeled faces primed for search by facial recognition. Its early customers included multiple police departments (and individual officers), which used the tool without warrants. Clearview has argued that it has a right to the data because the photos are “public.”

In general, searching by a face to gain a name and then other information is on the verge of wide availability: The Russian internet giant Yandex appears to have deployed facial-recognition technology in its image search tool. If you upload an unlabeled picture of my face into Google image search, it identifies me and then further searches my name, and I’m barely a public figure, if at all.

Given ever more refined surveillance, what might the world look like if we were to try to “get over” the loss of this privacy? Two very different extrapolations might allow us to glimpse some of the consequences of our privacy choices (or lack thereof) that are taking shape even today…

From Jonathan Zittrain (@zittrain), two scenarios for a post-privacy future: “A World Without Privacy Will Revive the Masquerade.”

* Gabriel García Márquez

###

As we get personal, we might send provocatively nonsensical birthday greetings to Hugo Ball; he was born on this date in 1886.  Ball worked as an actor with Max Reinhardt and Hermann Bahr in Berlin until the outbreak of World War I.  A staunch pacifist, Ball made his way to Switzerland, where he turned his hand to poetry in an attempt to express his horror at the conflagration enveloping Europe. (“The war is founded on a glaring mistake, men have been confused with machines.”)

Settling in Zürich, Ball was a co-founder of the Dada movement (and, lore suggests, its namer, having allegedly picked the word at random from a dictionary).  With Tristan Tzara and Hans Arp, among others, he co-founded and presided over the Cabaret Voltaire, the epicenter of Dada.  And in 1916, he created the first Dada Manifesto (Tzara’s came two years later).


 

Written by (Roughly) Daily

February 22, 2020 at 1:01 am