(Roughly) Daily


“Beware of him who would deny you access to information, for in his heart he dreams himself your master”*…

Plotting the position of the survey ship PATHFINDER, Alaska [image: NOAA]

Stewart Brand once suggested that “Information wants to be free. Information also wants to be expensive. …That tension will not go away.” Indeed, it seems to be growing…

Aaron Swartz was 26 years old when he took his own life. He did so under the shadow of legal prosecution, pursued by government lawyers intent on maximal punishment. If found guilty, he faced up to 50 years in prison and a $1 million fine. Swartz’s crime was not merely legal, but political. He had accessed a private computer network and gained possession of highly valuable information with the goal of sharing it. His actions threatened some of the most powerful, connected, and politically protected groups in the country. Their friends in the government were intent on sending a message.

It’s the kind of story you would expect about some far-off political dissident. But Swartz took his life in Brooklyn on a winter day in 2013 and his prosecutor was the U.S. federal government. When Swartz died, he was under indictment for 13 felony charges related to his use of an MIT computer to download too many scientific articles from the academic database JSTOR, ostensibly for the purpose of making them freely available to the public. Ultimately, Swartz potentially faced more jail time for downloading academic papers than he would have if he had helped Al Qaeda build a nuclear weapon. Even the Criminal Code of the USSR stipulated that those who stored and distributed anti-Soviet literature only faced five to seven years in prison. While prosecutors later pointed toward a potential deal for less time, Aaron would still have been labeled a felon for his actions—and to boot, JSTOR itself had reached a civil settlement and didn’t even pursue its own lawsuit.

But Aaron’s cause lived on. This September marks the ten-year anniversary of Sci-Hub, the online “shadow library” that provides access to millions of research papers otherwise hidden behind prohibitive paywalls. Founded by the Kazakhstani computer scientist Alexandra Elbakyan—popularly known as science’s “pirate queen”—Sci-Hub has grown to become a repository of over 85 million academic papers.

The site is popular globally, used by millions of people—many of whom would otherwise not be able to finish their degrees, advise their patients, or use text mining algorithms to make new scientific discoveries. Sci-Hub has become the unacknowledged foundation that helps the whole enterprise of academia to function. 

Even when they do not need to use Sci-Hub, the superior user experience it offers means that many people prefer to use the illegal site rather than access papers through their own institutional libraries. It is difficult to say how many ideas, grants, publications, and companies have been made possible by Sci-Hub, but it seems undeniable that Elbakyan’s ten-year-old website has become a crucial component of contemporary scholarship.  

The success of Sci-Hub has made it a target for injunctions and investigations. Academic publishers have sued Sci-Hub repeatedly, opponents have accused the site’s creators of collaborating with Russian intelligence, and private sector critics have dubbed it a threat to “national security.” Elbakyan recently tweeted out a notification she received that the FBI had requested her personal data from Apple. 

Whatever happens to Sci-Hub or Elbakyan, the fact that such a site exists is something of a tragedy. Sci-Hub currently fills a niche that should never have existed. Like the black-market medicine purchased by people who cannot afford prescription drugs, its very being indicts the official system that created the conditions of its emergence… 

The cost of individually purchasing all the articles required to complete a typical literature review could easily amount to thousands of dollars. Beyond paying for the articles themselves, academics often have to pay steep fees to publish their research. Meanwhile, most peer reviewers and editors charged with assessing, correcting, and formatting papers do not receive compensation for their work. 

It’s particularly ironic that this situation exists alongside a purported digital “infodemic” of misinformation. The costs of this plague are massive, from opposition to the pandemic response to the conspiracy theories that drove a loving father to fire his gun inside a pizza parlor and a man to kill a mafia boss accused of having ties to the deep state. But few public figures, if any, draw the direct connection between the expensive barricades around scientific research and the conspicuous epistemic collapse of significant portions of the American political discourse…

Whether intended or not, the impossibly high paywalls of academic publishers only serve to keep scientific information out of the population’s hands. What makes this even more discordant is that the people being prevented from accessing the information are often also the taxpayers who helped fund the research in the first place. 

By framing the debate about Sci-Hub as one concerning property rights, both advocates of Elbakyan’s site and its detractors fall afoul of what John Gall called the “operational fallacy.” In his book The Systems Bible, Gall defined the operational fallacy as a situation where “the system itself does not do what it says it is doing.” In other words, what a system calls itself is not always a reliable indicator of its true function. In this case, the name of the “academic publishing industry” implies that it is supposed to be involved in the dissemination of scholarship. But the effective function of the academic publishing industry as it actually exists is to prevent the dissemination of scholarly work. 

Given the example of Sci-Hub, the easy logistics of internet publication, and the funding structure of academic research, it seems clear that in the absence of the academic publishing industry, scholarship would be more widely available, not less. If the academic publishing industry did not exist, scientists could still do their research—in fact, it would be easier to do so as more people would have access to scholarly literature. The peer-review process could still function normally—though there are good reasons to change that as well. And the resulting papers could simply be posted in a place where anyone could read them. 

When we explore the actual function of the academic publishing industry—restricting access to scholarly research—we see that these publishers have little in common with the companies that have opposed other file-sharing sites. When several record companies sued Napster in 2001, they could make the legitimate case that the economic well-being of the musicians, producers, and all the people who were needed to distribute recorded music was at stake. No such parallel exists in the case of Sci-Hub. Scientists are not paid by the publishers. Peer reviewers are not paid by the publishers. Distribution itself, as proven by Sci-Hub and its more law-abiding brother arXiv, is cheap enough to be provided to the public for free. It’s not surprising, then, that polls reveal that scientists overwhelmingly support Sci-Hub…  

Eminently worth reading in full– the civic tragedy of academic publishing: “A World Without Sci-Hub,” from Jason Rhys Parry (@JRhysParry) in @palladiummag.

* Sid Meier

###

As we share and share alike, we might recall that it was on this date in 1970 that the Public Broadcasting Service– PBS– premiered, when it took over (most of) the functions of its predecessor, National Educational Television.

Unlike the five major commercial broadcast television networks in the United States (ABC, CBS, NBC, Fox, and The CW), PBS is technically not a network, but rather a program distributor that provides television content and related services to its member stations. Each station sets its own schedule and programs local content (e.g., local/state news, interviews, cultural, and public affairs programs) that supplements content provided by PBS and other public television distributors.

source

“Reality is merely an illusion, albeit a very persistent one”*…

Objective reality has properties outside the range of our senses (and for that matter, our instruments); and studies suggest that our brains warp sensory data as soon as we collect it. So we’d do well to remember that we don’t have– and likely won’t ever have– perfect information…

Many philosophers believe objective reality exists, if “objective” means “existing as it is independently of any perception of it.” However, ideas on what that reality actually is and how much of it we can interact with vary dramatically.

Aristotle argued, in contrast to his teacher Plato, that the world we interact with is as real as it gets and that we can have knowledge of it, but he thought that the knowledge we could have about it was not quite perfect. Bishop Berkeley thought everything existed as ideas in minds — he argued against the notion of physical matter — but that there was an objective reality since everything also existed in the mind of God. Immanuel Kant, a particularly influential Enlightenment philosopher, argued that while “the thing in itself” — an object as it exists independently of being subjectively observed — is real and exists, you cannot know anything about it directly.

Today, a number of metaphysical realists maintain that external reality exists, but they also suggest that our understanding of it is an approximation that we can improve upon. There are also direct realists who argue that we can interact with the world as it is, directly. They hold that many of the things we see when we interact with objects can be objectively known, though some things, like color, are subjective traits.

While it might be granted that our knowledge of the world is not perfect and is at least sometimes subjective, that doesn’t have to mean that the physical world doesn’t exist. The trouble is how we can go about knowing anything that isn’t subjective about it if we admit that our sensory information is not perfect.

As it turns out, that is a pretty big question.

Science both points toward a reality that exists independently of how any subjective observer interacts with it and shows us how much our viewpoints can get in the way of understanding the world as it is. The question of how objective science is in the first place is also a problem — what if all we are getting is a very refined list of how things work within our subjective view of the world?

Physical experiments like the Wigner’s Friend test show that our understanding of objective reality breaks down whenever quantum mechanics gets involved, even when it is possible to run a test. On the other hand, a lot of science seems to imply that there is an objective reality about which the scientific method is pretty good at capturing information.

Evolutionary biologist and author Richard Dawkins argues:

“Science’s belief in objective truth works. Engineering technology based upon the science of objective truth, achieves results. It manages to build planes that get off the ground. It manages to send people to the moon and explore Mars with robotic vehicles on comets. Science works, science produces antibiotics, vaccines that work. So anybody who chooses to say, ‘Oh, there’s no such thing as objective truth. It’s all subjective, it’s all socially constructed.’ Tell that to a doctor, tell that to a space scientist, manifestly science works, and the view that there is no such thing as objective truth doesn’t.”

While this leans a bit into being an argument from the consequences, he has a point: Large complex systems which suppose the existence of an objective reality work very well. Any attempt to throw out the idea of objective reality still has to explain why these things work.

A middle route might be to view science as the systematic collection of subjective information in a way that allows for intersubjective agreement between people. Under this understanding, even if we cannot see the world as it is, we could get universal or near-universal intersubjective agreement about something like how fast light travels in a vacuum. This might be as good as it gets, or it could be a way to narrow down what we can know objectively. Or maybe it is something else entirely.

While objective reality likely exists, our senses might not be able to access it well at all. We are limited beings with limited viewpoints and brains that begin to process sensory data the moment we acquire it. We must always be aware of our perspective, how that impacts what data we have access to, and that other perspectives may have a grain of truth to them…

Objective reality exists, but what can you know about it that isn’t subjective? Maybe not much: “You don’t see objective reality objectively: neuroscience catches up to philosophy.”

* Albert Einstein

###

As we ponder perspective, we might send thoughtful birthday greetings to Confucius; he was born on this date in 551 BCE. A Chinese philosopher and politician of the Spring and Autumn period, he has traditionally been considered the paragon of Chinese sages and is widely regarded as one of the most important and influential individuals in human history, as his teachings and philosophy formed the basis of East Asian culture and society, and remain influential across China and East Asia today.

His philosophical teachings, called Confucianism, emphasized personal and governmental morality, correctness of social relationships, justice, kindness, and sincerity. Confucianism was part of the Chinese social fabric and way of life; to Confucians, everyday life was the arena of religion. It was he who espoused the well-known principle “Do not do unto others what you do not want done to yourself,” the Golden Rule.

source

Written by (Roughly) Daily

September 28, 2021 at 1:00 am

“Facts are facts and will not disappear on account of your likes”*…

This artist rendering provided by the European Southern Observatory shows some of the 32 new planets astronomers found outside our solar system

… That said, some facts may morph out from under us. In consideration of “in-between” facts:

When people think of knowledge, they generally think of two sorts of facts: facts that don’t change, like the height of Mount Everest or the capital of the United States, and facts that fluctuate constantly, like the temperature or the stock market close.

But in between there is a third kind: facts that change slowly. These are facts which we tend to view as fixed, but which shift over the course of a lifetime. For example: What is Earth’s population? I remember learning 6 billion, and some of you might even have learned 5 billion. Well, it turns out it’s about 6.8 billion.

Or, imagine you are considering relocating to another city. Not recognizing the slow change in the economic fortunes of various metropolitan areas, you immediately dismiss certain cities. For example, Pittsburgh, a city in the core of the historic Rust Belt of the United States, was for a long time considered to be something of a city to avoid. But recently, its economic fortunes have changed, swapping steel mills for technology, with its job growth ranked sixth in the entire United States.

These slow-changing facts are what I term “mesofacts.” Mesofacts are the facts that change neither too quickly nor too slowly, that lie in this difficult-to-comprehend middle, or meso-, scale. Often, we learn these in school when young and hold onto them, even after they change. For example, if, as a baby boomer, you learned high school chemistry in 1970, and then, as we all are apt to do, did not take care to brush up on your chemistry periodically, you would not realize that there are 12 new elements in the Periodic Table. Over a tenth of the elements have been discovered since you graduated high school! While this might not affect your daily life, it is astonishing and a bit humbling.

For these kinds of facts, the analogy of how to boil a frog is apt: Change the temperature quickly, and the frog jumps out of the pot. But slowly increase the temperature, and the frog doesn’t realize that things are getting warmer, until it’s been boiled. So, too, is it with humans and how we process information. We recognize rapid change, whether it’s as simple as a fast-moving object or living with the knowledge that humans have walked on the moon. But anything short of large-scale rapid change is often ignored. This is the reason we continue to write the wrong year during the first days of January.

Our schools are biased against mesofacts. The arc of our educational system is to be treated as little generalists when children, absorbing bits of knowledge about everything from biology to social studies to geology. But then, as we grow older, we are encouraged to specialize. This might have been useful in decades past, but in our increasingly fast-paced and interdisciplinary world, lacking an even approximate knowledge of our surroundings is unwise.

Updating your mesofacts can change how you think about the world. Do you know the percentage of people in the world who use mobile phones? In 1997, the answer was 4 percent. By 2007, it was nearly 50 percent. The fraction of people who are mobile phone users is the kind of fact you might read in a magazine and quote at a cocktail party. But years later the number you would be quoting would not just be inaccurate, it would be seriously wrong. The difference between a tiny fraction of the world and half the globe is startling, and completely changes our view on global interconnectivity.

Mesofacts can also be fun. Let’s focus for a moment on some mesofacts that can be of vital importance if you’re a child, or parent of a child: those about dinosaurs. Just a few decades ago, dinosaurs were thought to be cold-blooded, slow-witted lizards that walked with their legs splayed out beside them. Now, scientists think that many dinosaurs were warm-blooded and fast-moving creatures. And they even had feathers! Just a few weeks ago we learned about the color patterns of dinosaurs (stripes! with orange tufts!). These facts might not affect how you live your life, but then again, you’re probably not 6 years old. There is another mesofact that is unlikely to affect your daily routine, but might win you a bar bet: the number of planets known outside the solar system. After the first extrasolar planet around an ordinary star made headlines back in 1995, most people stopped paying attention. Well, the number of extrasolar planets is currently over 400. Know this, and the next round won’t be on you.

The fact that the world changes rapidly is exciting, but everyone knows about that. There is much change that is neither fast nor momentous, but no less breathtaking.

Introducing the mesofact: “Warning: Your reality is out of date,” from Samuel Arbesman (@arbesman), who went on to develop this notion in a wonderful book, The Half-Life of Facts. Via @inevernu, who notes that the above article, which ran in 2010, contains examples of mesofacts that have already changed again– illustrating Arbesman’s point…

* Jawaharlal Nehru

###

As we noodle on knowledge, we might recall that it was on this date in 1642 that the first American college commencement ceremony was held at Harvard College. It was North America’s first taste of non-religious ritual– and was designed to send a clear message to England that its American colonies were a going concern. Still, of the nine seniors who graduated, three soon crossed the Atlantic the other way, one to serve as a diplomat for the rebellious Oliver Cromwell and another to study medicine in Italy.

Apropos the piece above, the curriculum followed by those graduates was rather different– filled with different facts– from that of classes in later centuries.

source

Written by (Roughly) Daily

September 23, 2021 at 1:00 am

“Wonder is the beginning of wisdom”*…

An argument for curiosity, openness, and the humility that underlies them both…

Philosophers aren’t the only ones who love wisdom. Everyone, philosopher or not, loves her own wisdom: the wisdom she has or takes herself to have. What distinguishes the philosopher is loving the wisdom she doesn’t have. Philosophy is, therefore, a form of humility: being aware that you lack what is of supreme importance. There may be no human being who exemplified this form of humility more perfectly than Socrates. It is no coincidence that he is considered the first philosopher within the Western canon.

Socrates did not write philosophy; he simply went around talking to people. But these conversations were so transformative that Plato devoted his life to writing dialogues that represent Socrates in conversation. These dialogues are not transcripts of actual conversations, but they are nonetheless clearly intended to reflect not only Socrates’s ideas but his personality. Plato wanted the world to remember Socrates. Generations after Socrates’s death, warring philosophical schools such as the Stoics and the Skeptics each appropriated Socrates as figurehead. Though they disagreed on just about every point of doctrine, they were clear that in order to count themselves as philosophers they had to somehow be working in the tradition of Socrates.

What is it about Socrates that made him into a symbol for the whole institution of philosophy? Consider the fact that, when the Oracle at Delphi proclaims Socrates wisest of men, he tries to prove it wrong. As Plato recounts it in the Apology:

I went to one of those reputed wise, thinking that there, if anywhere, I could refute the oracle and say to it: “This man is wiser than I, but you said I was.” Then, when I examined this man—there is no need for me to tell you his name, he was one of our public men—my experience was something like this: I thought that he appeared wise to many people and especially to himself, but he was not. I then tried to show him that he thought himself wise, but that he was not. As a result he came to dislike me, and so did many of the bystanders. So I withdrew and thought to myself: “I am wiser than this man; it is likely that neither of us knows anything worthwhile, but he thinks he knows something when he does not, whereas when I do not know, neither do I think I know; so I am likely to be wiser than he to this small extent, that I do not think I know what I do not know.”

If Socrates’s trademark claim is this protestation of ignorance, his trademark activity is the one also described in this passage: refuting the views of others. These are the conversations we find in Plato’s texts. How are the claim and the activity related? Socrates denies that his motivations are altruistic: he says he is not a teacher, and insists that he is himself the primary beneficiary of the conversations he initiates. This adds to the mystery: What is Socrates getting out of showing people that they don’t know what they take themselves to know? What’s his angle?

Over and over again, Socrates approaches people who are remarkable for their lack of humility—which is to say, for the fact that they feel confident in their own knowledge of what is just, or pious, or brave, or moderate. You might have supposed that Socrates, whose claim to fame is his awareness of his own ignorance, would treat these self-proclaimed “wise men” (Sophists) with contempt, hostility, or indifference. But he doesn’t. The most remarkable feature of Socrates’s approach is his punctilious politeness and sincere enthusiasm. The conversation usually begins with Socrates asking his interlocutor: Since you think you know, can you tell me, what is courage (or wisdom, or piety, or justice . . .)? Over and over again, it turns out that they think they can answer, but they can’t. Socrates’s hope springs eternal: even as he walks toward the courtroom to be tried—and eventually put to death—for his philosophical activity, he is delighted to encounter the self-important priest Euthyphro, who will, surely, be able to say what piety is…

Socrates seemed to think that the people around him could help him acquire the knowledge he so desperately wanted—even though they were handicapped by the illusion that they already knew it. Indeed, I believe that their ill-grounded confidence was precisely what drew Socrates to them. If you think you know something, you will be ready to speak on the topic in question. You will hold forth, spout theories, make claims—and all this, under Socrates’s relentless questioning, is the way to actually acquire the knowledge you had deluded yourself into thinking you already had…

It’s one thing to say, “I don’t know anything.” That thought comes cheap. One can wonder, “Who really and truly knows anything?” in a way that is dismissive, uninquisitive, detached. It can be a way of saying, “Knowledge is unattainable, so why even try?” Socratic humility is more expensive and more committal than that. He sought to map the terrain of his ignorance, to plot its mountains and its rivers, to learn to navigate it. That, I think, is why he speaks of knowledge of his own ignorance. He’s not just someone who acknowledges or admits to his ignorance, but someone who has learned to dwell within it.

Admittedly, this may seem like a paradoxical project. It’s one thing to be missing your wallet—you will know it once you’ve found it. But suppose you’re missing not only your wallet, but also the knowledge that you ever had a wallet, and the understanding of what a wallet is. One of Socrates’s interlocutors, Meno, doubts whether it’s possible to come to know anything if you know so little to begin with. If someone doesn’t know where she’s going, it doesn’t seem as though she can even take a first step in the right direction. Can you map in total darkness?

Socrates’s answer was no. Or at least: you can’t do it alone. The right response to noticing one’s own ignorance is to try to escape it by acquiring someone else’s knowledge. But the only way to do that is to explain to them why you aren’t yet able to accept this or that claim of theirs as knowledge—and that is what mapping one’s ignorance amounts to. Socrates stages an exhibition of this method for Meno by demonstrating how much geometrical progress he can make with a young slave boy by doing nothing but asking questions that expose the boy’s false assumptions. It is when he refutes others’ claims to knowledge that Socrates’s own ignorance takes shape, for him, as something he can know. What appears as a sea of darkness when approached introspectively turns out to be navigable when brought into contact with the knowledge claims of another…

Socrates saw the pursuit of knowledge as a collaborative project involving two very different roles. There’s you or I or some other representative of Most People, who comes forward and makes a bold claim. Then there’s Socrates, or one of his contemporary descendants, who questions and interrogates and distinguishes and calls for clarification. This is something we’re often still doing—as philosophers, as scientists, as interviewers, as friends, on Twitter and Facebook and in many casual personal conversations. We’re constantly probing one another, asking, “How can you say that, given X, Y, Z?” We’re still trying to understand one another by way of objection, clarification, and the simple fact of inability to take what someone has said as knowledge. It comes so naturally to us to organize ourselves into the knower/objector pairing that we don’t even notice we are living in the world that Socrates made. The scope of his influence is remarkable. But equally remarkable is the means by which it was achieved: he did so much by knowing, writing, and accomplishing—nothing at all.

And yet for all this influence, many of our ways are becoming far from Socratic. More and more our politics are marked by unilateral persuasion instead of collaborative inquiry. If, like Socrates, you view knowledge as an essentially collaborative project, you don’t go into a conversation expecting to persuade any more than you expect to be persuaded. By contrast, if you do assume you know, you embrace the role of persuader in advance, and stand ready to argue people into agreement. If argument fails, you might tolerate a state of disagreement—but if the matter is serious enough, you’ll resort to enforcing your view through incentives or punishments. Socrates’s method eschewed the pressure to persuade. At the same time, he did not tolerate tolerance. His politics of humility involved genuinely opening up the question under dispute, in such a way that neither party would be permitted to close it, to settle on an answer, unless the other answered the same. By contrast, our politics—of persuasion, tolerance, incentives, and punishment—is deeply uninquisitive…

Knowing takes radical collaboration: an openness to being persuaded as much as an eagerness to persuade: “Against Persuasion,” from Agnes Callard (@AgnesCallard)

* Socrates

###

As we listen and learn, we might recall that it was on this date in 1966 that the Eleventh Plenum of the Eighth Central Committee of the Chinese Communist Party was hastily convened to advance Mao Zedong’s by then decidedly radical agenda for China– teeing up “Red August,” a series of purges of reactionary or otherwise impure thinkers. According to official statistics published in 1980, from August to September in 1966, a total of 1,772 people—including teachers and principals of many schools—were killed in Beijing by Red Guards; 33,695 homes were ransacked and 85,196 families were forced to leave the city. (1985 statistics, which included the areas immediately around Beijing, put the death toll at around 10,000.)

Red August kicked off the Great Proletarian Cultural Revolution, or as we tend to know it, The Cultural Revolution, which lasted until Mao’s death in 1976. Its stated goal was to preserve Chinese communism by purging alternative thought– remnants of capitalist and traditional elements– from Chinese society, and to impose Mao Zedong Thought (known outside China as Maoism) as the dominant ideology in the PRC. A selection of Mao’s sayings, compiled in the Little Red Book, became a sacred text in what was, essentially, a personality cult.

Estimates of the death toll from the Cultural Revolution, including civilians and Red Guards, vary greatly, ranging from hundreds of thousands to 20 million. The exact figure of those who were persecuted or died during the Cultural Revolution, however, may never be known, as many deaths went unreported or were actively covered up by the police or local authorities. Tens of millions of people were persecuted (especially members of ethnic minorities): senior officials were purged or exiled; millions were accused of being members of the Five Black Categories, suffering public humiliation, imprisonment, torture, hard labor, seizure of property, and sometimes execution or harassment into suicide; intellectuals were considered the “Stinking Old Ninth” and were widely persecuted—notable scholars and scientists were killed or committed suicide. Schools and universities were closed. And over 10 million urban “intellectual youths” were sent to rural areas in the Down to the Countryside Movement.

source

Written by (Roughly) Daily

August 1, 2021 at 1:00 am

“Doing research on the Web is like using a library assembled piecemeal by pack rats and vandalized nightly”*…

But surely, argues Jonathan Zittrain, it shouldn’t be that way…

Sixty years ago the futurist Arthur C. Clarke observed that any sufficiently advanced technology is indistinguishable from magic. The internet—how we both communicate with one another and together preserve the intellectual products of human civilization—fits Clarke’s observation well. In Steve Jobs’s words, “it just works,” as readily as clicking, tapping, or speaking. And every bit as much aligned with the vicissitudes of magic, when the internet doesn’t work, the reasons are typically so arcane that explanations for it are about as useful as trying to pick apart a failed spell.

Underpinning our vast and simple-seeming digital networks are technologies that, if they hadn’t already been invented, probably wouldn’t unfold the same way again. They are artifacts of a very particular circumstance, and it’s unlikely that in an alternate timeline they would have been designed the same way.

The internet’s distinct architecture arose from a distinct constraint and a distinct freedom: First, its academically minded designers didn’t have or expect to raise massive amounts of capital to build the network; and second, they didn’t want or expect to make money from their invention.

The internet’s framers thus had no money to simply roll out a uniform centralized network the way that, for example, FedEx metabolized a capital outlay of tens of millions of dollars to deploy liveried planes, trucks, people, and drop-off boxes, creating a single point-to-point delivery system. Instead, they settled on the equivalent of rules for how to bolt existing networks together.

Rather than a single centralized network modeled after the legacy telephone system, operated by a government or a few massive utilities, the internet was designed to allow any device anywhere to interoperate with any other device, allowing any provider able to bring whatever networking capacity it had to the growing party. And because the network’s creators did not mean to monetize, much less monopolize, any of it, the key was for desirable content to be provided naturally by the network’s users, some of whom would act as content producers or hosts, setting up watering holes for others to frequent.

Unlike the briefly ascendant proprietary networks such as CompuServe, AOL, and Prodigy, content and network would be separated. Indeed, the internet had and has no main menu, no CEO, no public stock offering, no formal organization at all. There are only engineers who meet every so often to refine its suggested communications protocols that hardware and software makers, and network builders, are then free to take up as they please.

So the internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks. Tim Berners-Lee took up the invite and invented the protocols for the World Wide Web, an application to run on the internet. If your computer spoke “web” by running a browser, then it could speak with servers that also spoke web, naturally enough known as websites. Pages on sites could contain links to all sorts of things that would, by definition, be but a click away, and might in practice be found at servers anywhere else in the world, hosted by people or organizations not only not affiliated with the linking webpage, but entirely unaware of its existence. And webpages themselves might be assembled from multiple sources before they displayed as a single unit, facilitating the rise of ad networks that could be called on by websites to insert surveillance beacons and ads on the fly, as pages were pulled together at the moment someone sought to view them.

And like the internet’s own designers, Berners-Lee gave away his protocols to the world for free—enabling a design that omitted any form of centralized management or control, since there was no usage to track by a World Wide Web, Inc., for the purposes of billing. The web, like the internet, is a collective hallucination, a set of independent efforts united by common technological protocols to appear as a seamless, magical whole.

This absence of central control, or even easy central monitoring, has long been celebrated as an instrument of grassroots democracy and freedom. It’s not trivial to censor a network as organic and decentralized as the internet. But more recently, these features have been understood to facilitate vectors for individual harassment and societal destabilization, with no easy gating points through which to remove or label malicious work not under the umbrellas of the major social-media platforms, or to quickly identify their sources. While both assessments have power to them, they each gloss over a key feature of the distributed web and internet: Their designs naturally create gaps of responsibility for maintaining valuable content that others rely on. Links work seamlessly until they don’t. And as tangible counterparts to online work fade, these gaps represent actual holes in humanity’s knowledge…
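Zittrain’s point about links that “work seamlessly until they don’t” is easy to check for oneself. As a rough illustration only– a minimal sketch, not anything drawn from his essay– here is a short Python snippet that takes a handful of URLs (the ones below are placeholders) and reports which still resolve and which have rotted:

# Minimal link-rot checker: a rough sketch, not part of Zittrain's essay.
# Given a list of URLs (placeholders here), report which still resolve.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_link(url: str, timeout: float = 10.0) -> str:
    """Return a short status string for one URL."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-rot-check/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except HTTPError as err:    # the server answered, but with an error code
        return f"BROKEN (HTTP {err.code})"
    except URLError as err:     # DNS failure, refused connection, timeout...
        return f"UNREACHABLE ({err.reason})"

if __name__ == "__main__":
    # Hypothetical example URLs; substitute the links you actually want to audit.
    urls = [
        "https://example.com/",
        "https://example.com/some-page-that-moved",
    ]
    for url in urls:
        print(f"{check_link(url):<28} {url}")

Run something like this against the outbound links of a decade-old webpage and the “gaps of responsibility” the excerpt describes tend to show up quickly as dead ends.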

The glue that holds humanity’s knowledge together is coming undone: “The Internet Is Rotting.” @zittrain explains what we can do to heal it.

(Your correspondent seconds his call to support the critically-important work of The Internet Archive and the Harvard Library Innovation Lab, along with the other initiatives he outlines.)

* Roger Ebert

###

As we protect our past for the future, we might recall that it was on this date in 1937 that Hormel introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.

source
