(Roughly) Daily


“Doing research on the Web is like using a library assembled piecemeal by pack rats and vandalized nightly”*…

But surely, argues Jonathan Zittrain, it shouldn’t be that way…

Sixty years ago the futurist Arthur C. Clarke observed that any sufficiently advanced technology is indistinguishable from magic. The internet—how we both communicate with one another and together preserve the intellectual products of human civilization—fits Clarke’s observation well. In Steve Jobs’s words, “it just works,” as readily as clicking, tapping, or speaking. And every bit as much aligned with the vicissitudes of magic, when the internet doesn’t work, the reasons are typically so arcane that explanations for it are about as useful as trying to pick apart a failed spell.

Underpinning our vast and simple-seeming digital networks are technologies that, if they hadn’t already been invented, probably wouldn’t unfold the same way again. They are artifacts of a very particular circumstance, and it’s unlikely that in an alternate timeline they would have been designed the same way.

The internet’s distinct architecture arose from a distinct constraint and a distinct freedom: First, its academically minded designers didn’t have or expect to raise massive amounts of capital to build the network; and second, they didn’t want or expect to make money from their invention.

The internet’s framers thus had no money to simply roll out a uniform centralized network the way that, for example, FedEx metabolized a capital outlay of tens of millions of dollars to deploy liveried planes, trucks, people, and drop-off boxes, creating a single point-to-point delivery system. Instead, they settled on the equivalent of rules for how to bolt existing networks together.

Rather than a single centralized network modeled after the legacy telephone system, operated by a government or a few massive utilities, the internet was designed to allow any device anywhere to interoperate with any other device, allowing any provider able to bring whatever networking capacity it had to the growing party. And because the network’s creators did not mean to monetize, much less monopolize, any of it, the key was for desirable content to be provided naturally by the network’s users, some of whom would act as content producers or hosts, setting up watering holes for others to frequent.

Unlike the briefly ascendant proprietary networks such as CompuServe, AOL, and Prodigy, content and network would be separated. Indeed, the internet had and has no main menu, no CEO, no public stock offering, no formal organization at all. There are only engineers who meet every so often to refine its suggested communications protocols that hardware and software makers, and network builders, are then free to take up as they please.

So the internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks. Tim Berners-Lee took up the invite and invented the protocols for the World Wide Web, an application to run on the internet. If your computer spoke “web” by running a browser, then it could speak with servers that also spoke web, naturally enough known as websites. Pages on sites could contain links to all sorts of things that would, by definition, be but a click away, and might in practice be found at servers anywhere else in the world, hosted by people or organizations not only not affiliated with the linking webpage, but entirely unaware of its existence. And webpages themselves might be assembled from multiple sources before they displayed as a single unit, facilitating the rise of ad networks that could be called on by websites to insert surveillance beacons and ads on the fly, as pages were pulled together at the moment someone sought to view them.

And like the internet’s own designers, Berners-Lee gave away his protocols to the world for free—enabling a design that omitted any form of centralized management or control, since there was no usage to track by a World Wide Web, Inc., for the purposes of billing. The web, like the internet, is a collective hallucination, a set of independent efforts united by common technological protocols to appear as a seamless, magical whole.

This absence of central control, or even easy central monitoring, has long been celebrated as an instrument of grassroots democracy and freedom. It’s not trivial to censor a network as organic and decentralized as the internet. But more recently, these features have been understood to facilitate vectors for individual harassment and societal destabilization, with no easy gating points through which to remove or label malicious work not under the umbrellas of the major social-media platforms, or to quickly identify their sources. While both assessments have power to them, they each gloss over a key feature of the distributed web and internet: Their designs naturally create gaps of responsibility for maintaining valuable content that others rely on. Links work seamlessly until they don’t. And as tangible counterparts to online work fade, these gaps represent actual holes in humanity’s knowledge…

The glue that holds humanity’s knowledge together is coming undone: “The Internet Is Rotting.” @zittrain explains what we can do to heal it.

(Your correspondent seconds his call to support the critically important work of The Internet Archive and the Harvard Library Innovation Lab, along with the other initiatives he outlines.)

* Roger Ebert


As we protect our past for the future, we might recall that it was on this date in 1937 that Hormel introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.


“Do not explain your philosophy. Embody it.”*…

Truth, knowledge, justice – to understand how our loftiest abstractions earn their keep, trace them to their practical origins…

Unlike ideas of air, food and water that allow us to think about the everyday resources we need to survive, the venerable notions of knowledge, truth or justice don’t obviously cater to practical needs. On the contrary, these exalted ideals draw our gaze away from practical pursuits. They are imbued with grandeur precisely because of their superb indifference to mundane human concerns. Having knowledge is practically useful, but why would we also need the concept of knowledge? The dog who knows where his food is seems fine without the concept of knowledge, so long as he’s not called upon to give a commencement address. And yet the concepts of knowledge, truth or justice appear to have been important enough to emerge across different cultures and endure over the ages. Why, then, did we ever come to think in these terms?

Friedrich Nietzsche grumbled that, when it came to identifying the origins of lofty ideas, philosophers had a tendency to be led astray by their own respect for them. In dealing with what they felt were the ‘highest concepts’, the ‘last wisps of smoke from the evaporating end of reality’, they had reverently placed them ‘at the beginning as the beginning’, convinced that the higher could never have grown out of the lower: Plato’s eternal Forms, the mind of God, Immanuel Kant’s noumenal world – they had all served as cradles to higher concepts, offering them a suitably distinguished pedigree.

But to insist that higher concepts were bound to have higher origins, Nietzsche thought, was to let one’s respect for those ideas get in the way of a truthful understanding of them. If, after the ‘Death of God’ and the advent of Darwinism, we were successfully to ‘translate humanity back into nature’, as Nietzsche’s felicitous rallying cry had it, we needed to trace seemingly transcendent ideas such as knowledge, truth or justice to their roots in human concerns. Their origins weren’t empyrean (to be sought in the highest spheres) but distinctly sublunary (found in lowly practical needs). Nietzsche encouraged us to ask: what necessities might have been the mothers of those inventions? And what, if anything, do they still do for us?…

Matthieu Queloz (@matthieu_queloz) takes up Nietzsche‘s challenge: “Ideas that work.”

[image above: source]

* Epictetus


As we root out first principles, we might spare a thought for Sir Alfred Jules “Freddie” Ayer (usually cited as A.J. Ayer); he died on this date in 1989. A philosopher associated with the British humanist movement, he is best remembered as the champion of logical positivism, particularly in his books Language, Truth, and Logic (1936) and The Problem of Knowledge (1956). While he had a number of material disagreements with Nietzsche, Ayer shared his rejection of objective ethical values.


“Trivia is a fact without a home”*…

What makes for a good trivia question? There are some common-sense requirements. It should be clearly written, accurate, and gettable for at least some people. (Acceptable degrees of difficulty vary.) It must be properly “pinned” to its answer, meaning that there are no correct responses other than those the questioner is seeking. (This can be trickier than you might think.) In the opinion of Shayne Bushfield, the creator and sole full-time employee of LearnedLeague, an online trivia community that he has run since 1997, people should recognize the answer to the question as something worth knowing, as having a degree of importance. “Trivia is not the right word for it,” he told me recently. “Because trivia technically means trivial, or not worth knowing, and it’s the opposite.”

The idea that the answers to trivia questions are worth knowing is a matter of some debate, and has been more or less since trivia itself was born. The pop-culture pastime of quizzing one another on a variety of subjects as a kind of game is fundamentally a phenomenon of the past hundred years or so: its first appearance as a fad seems to date to 1927, when “Ask Me Another! The Question Book” was published. As the “Jeopardy!” champion Ken Jennings notes in his book “Brainiac,” “Ask Me Another” was written by “two out-of-work Amherst alumni” living in Manhattan, who “were shocked to find that, despite their fancy new diplomas and broad liberal educations, the job world wasn’t beating a path to their door.” Their book was a hit, and newspapers began running quiz columns, a follow-up of sorts to the national crossword craze of a couple of years before. Quiz shows came to radio and television about a decade later. But none of these games were called trivia until a pair of Columbia undergraduates, in the mid-sixties, shared their version of the game, first in the school’s Daily Spectator and later in their own popular quiz book, which really did prize the trivial: the name of the Lone Ranger’s nephew, the name of the snake that appeared in “We’re No Angels,” and so on. This version of trivia was all about the stuff one had read, listened to, or watched as a kid, and its appeal, according to one of the Columbia pair, was concentrated among “young adults who on the one hand realize they have misspent their youth and yet, on the other hand, do not want to let go of it.” The purpose of playing, he explained, was experiencing the feeling produced when an answer finally came to you, “an effect similar to the one that might be induced by a pacifier.”

Presumably, it has always been satisfying to know things, but the particular pleasure of trivia seems to depend on two relatively recent developments: the constant relaying of new information (i.e., mass media) and the mass production of people who learn a lot of things they don’t really need to know. (College attendance began steadily rising in the nineteen-twenties, before booming after the Second World War.) It is sometimes asked whether the popularity of trivia will diminish in the age of Google and Siri, but those earlier developments have only accelerated, and trivia seems, if anything, more popular than ever. In contrast to the mindless ease of looking up the answer to a question online, there’s a gratifying friction in pulling a nearly forgotten fact from your own very analog brain…

The quietly oppositional delight of knowing things you don’t need to know: “The Pleasures of LearnedLeague and the Spirit of Trivia.”

* Don Rittner


As we revel in the rarefied, we might celebrate the answer to a tough trivia question: today is the birthday of John McClane, the protagonist of the Die Hard films; he was “born” on this date in 1955.


Written by (Roughly) Daily

May 23, 2021 at 1:01 am

“So many books, so little time”*…

Dear The Sophist, 

I own a lot of books, and nearly enough shelves to fit them. I haven’t read most of them—has anyone with a lot of books read most of them?—yet I still get impulses to buy more. Can you please tell me why it’s OK for me to buy more books? I should add that I live with a partner who doesn’t own a lot of books, but tolerates mine so far. So far.


Dear Volume Purchaser,

Books are ridiculous objects to buy, aren’t they? For the sake of spending a day or two, maybe a week, with some author’s thoughts and words, you take custody of this physical item that sticks around, and around, as more and more others accumulate along with it. You look at them, almost unseeingly, day after day; the walls of your rooms press in; you pay extra money to the movers to drag the extra weight around from one dwelling to the next, all because you read an interesting review once or a cover caught your eye in a bookstore.  

You know what else is ridiculous? The sheer impermanence of thought. The constant yet ephemeral flickering of partial understanding across the synapses in our wet and mortal brains, and the dry circuits of the junky and even more short-lived electronic ersatz brains we rely on for backup. A book is an investment against forgetting and death—a poor investment, but it beats the alternatives. It is a slippery yet real toehold on eternity… If you stop the flow of new books, you stop this flow of possibilities…

Too many books? Tom Scocca (@tomscocca) explains that there’s no such thing as too many books. (via the ever-illuminating Today in Tabs)

And lest one fear that the only option is to buy books, remember the Public Library…

Central Library, Kansas City (source)

* Frank Zappa


As we reorganize our shelves, we might spare a thought for someone whose works definitely deserve places of honor thereon, Octavia Estelle Butler; she died on this date in 2006. An African American woman science fiction author, she was a rarity in her field. But her primary distinction was her extraordinary talent, as manifest in novels and stories that stretch the imagination even as they explore the all-too-real truths of the human condition. She was a multiple recipient of both the Hugo and Nebula awards, and became (in 1995) the first science-fiction writer to receive a MacArthur Fellowship.

It’s a measure of her insight that her work– perhaps especially her “Parable” series— is being re-discovered as painfully prescient of our current times.


Written by (Roughly) Daily

February 24, 2021 at 1:01 am

“Facts alone, no matter how numerous or verifiable, do not automatically arrange themselves into an intelligible, or truthful, picture of the world. It is the task of the human mind to invent a theoretical framework to account for them.”*…

PPPL physicist Hong Qin in front of images of planetary orbits and computer code

… or maybe not. A couple of decades ago, your correspondent came across a short book that aimed to explain how we know what we think we know, Truth: A History and a Guide for the Perplexed, by Felipe Fernández-Armesto (then, a professor of history at Oxford; now, at Notre Dame)…

According to Fernández-Armesto, people throughout history have sought to get at the truth in one or more of four basic ways. The first is through feeling. Truth is a tangible entity. The third-century B.C. Chinese sage Chuang Tzu stated, ”The universe is one.” Others described the universe as a unity of opposites. To the fifth-century B.C. Greek philosopher Heraclitus, the cosmos is a tension like that of the bow or the lyre. The notion of chaos comes along only later, together with uncomfortable concepts like infinity.

Then there is authoritarianism, ”the truth you are told.” Divinities can tell us what is wanted, if only we can discover how to hear them. The ancient Greeks believed that Apollo would speak through the mouth of an old peasant woman in a room filled with the smoke of bay leaves; traditionalist Azande in the Nilotic Sudan depend on the response of poisoned chickens. People consult sacred books, or watch for apparitions. Others look inside themselves, for truths that were imprinted in their minds before they were born or buried in their subconscious minds.

Reasoning is the third way Fernández-Armesto cites. Since knowledge attained by divination or introspection is subject to misinterpretation, eventually people return to the use of reason, which helped thinkers like Chuang Tzu and Heraclitus describe the universe. Logical analysis was used in China and Egypt long before it was discovered in Greece and in India. If the Greeks are mistakenly credited with the invention of rational thinking, it is because of the effective ways they wrote about it. Plato illustrated his dialogues with memorable myths and brilliant metaphors. Truth, as he saw it, could be discovered only by abstract reasoning, without reliance on sense perception or observation of outside phenomena. Rather, he sought to excavate it from the recesses of the mind. The word for truth in Greek, aletheia, means ”what is not forgotten.”

Plato’s pupil Aristotle developed the techniques of logical analysis that still enable us to get at the knowledge hidden within us. He examined propositions by stating possible contradictions and developed the syllogism, a method of proof based on stated premises. His methods of reasoning have influenced independent thinkers ever since. Logicians developed a system of notation, free from the associations of language, that comes close to being a kind of mathematics. The uses of pure reason have had a particular appeal to lovers of force, and have flourished in times of absolutism like the 17th and 18th centuries.

Finally, there is sense perception. Unlike his teacher, Plato, and many of Plato’s followers, Aristotle realized that pure logic had its limits. He began with study of the natural world and used evidence gained from experience or experimentation to support his arguments. Ever since, as Fernández-Armesto puts it, science and sense have kept time together, like voices in a duet that sing different tunes. The combination of theoretical and practical gave Western thinkers an edge over purer reasoning schemes in India and China.

The scientific revolution began when European thinkers broke free from religious authoritarianism and stopped regarding this earth as the center of the universe. They used mathematics along with experimentation and reasoning and developed mechanical tools like the telescope. Fernández-Armesto’s favorite example of their empirical spirit is the grueling Arctic expedition in 1736 in which the French scientist Pierre Moreau de Maupertuis determined (rightly) that the earth was not round like a ball but rather an oblate spheroid…


One of Fernández-Armesto’s most basic points is that our capacity to apprehend “the truth”– to “know”– has developed throughout history. And history’s not over. So, your correspondent wondered, mightn’t there emerge a fifth source of truth, one rooted in the assessment of vast, ever-more-complete data maps of reality– a fifth way of knowing?

Well, those days may be upon us…

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a ‘serving algorithm,’ then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
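The “data to data” idea can be sketched in a few lines. The snippet below is your correspondent’s own toy illustration– not Qin’s code– assuming nothing more than synthetic orbit samples: it fits a one-step predictor by least squares and rolls it forward, with no law of motion supplied anywhere.

```python
import numpy as np

# Toy sketch of "data to data" prediction (not Qin's actual program):
# learn a one-step orbit predictor from observations alone.

# Synthetic "observations": a body on a circular orbit, sampled at fixed steps.
theta = np.linspace(0, 4 * np.pi, 400)
states = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (x, y) samples

# Training pairs: each state and its successor, (s_t, s_{t+1}).
X, Y = states[:-1], states[1:]

# Fit a linear map M by least squares: s_{t+1} ~ s_t @ M.
# For uniform circular motion the true map is a rotation matrix,
# but here the fit "discovers" it from data; Newton never appears.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned map forward to predict 100 future points of the orbit.
s = states[-1]
for _ in range(100):
    s = s @ M

# The predicted point stays on the unit circle: radius ~ 1.0.
print(round(float(np.hypot(*s)), 3))  # prints 1.0
```

The learned matrix plays the role of Qin’s “black box”: it makes accurate predictions, yet nothing in it looks like a stated law of physics.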

The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless ‘translate’ a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.
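Searle’s point– rule-following without understanding– is almost embarrassingly easy to mechanize. A trivial sketch (your correspondent’s toy, with a hypothetical two-entry rulebook, not anything from Searle):

```python
# A lookup table "translates" with pure symbol manipulation:
# match the input's shape, emit the paired output, understand nothing.
rulebook = {
    "你好": "hello",
    "谢谢": "thank you",
}

def follow_rules(symbol: str) -> str:
    # The operator of this function needn't know Chinese at all.
    return rulebook.get(symbol, "??")

print(follow_rules("你好"))  # prints hello
```

Whether a vastly larger rulebook would amount to understanding is, of course, exactly the question the thought experiment leaves open.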

Qin was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

“I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”

Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” [Qin’s collaborator Eric] Palmerduca said…

But then, as Edwin Hubble observed, “observations always involve theory,” theory that’s implicit in the particulars and the structure of the data being collected and fed to the AI. So, perhaps this is less a new way of knowing, than a new way of enhancing Fernández-Armesto’s third way– reason– as it became the scientific method…

The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”
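Gap-filling of the kind Palmerduca describes can be illustrated with a deliberately simple stand-in for the machine learning (your correspondent’s example, with made-up numbers): fit a model to sparse observations, then use it to predict the unobserved stretch.

```python
import numpy as np

# Sparse "measurements" of a smooth signal, with a gap between t=1 and t=3.
t_obs = np.array([0.0, 0.5, 1.0, 3.0, 3.5, 4.0])
y_obs = t_obs ** 2  # the underlying (unknown-to-us) signal is t^2

# Fit a quadratic model to what was observed...
coeffs = np.polyfit(t_obs, y_obs, deg=2)

# ...and use it to fill in the unobserved gap before theorizing further.
t_gap = np.array([1.5, 2.0, 2.5])
y_fill = np.polyval(coeffs, t_gap)
print(np.round(y_fill, 2))  # ~ [2.25, 4.0, 6.25], matching t^2
```

The filled-in points are predictions, not observations– which is precisely why, as Hubble’s caution suggests, the theory implicit in the model leaks into the “data.”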

In either case: “New machine learning theory raises questions about nature of science.”

* Francis Bello


As we experiment with epistemology, we might send carefully observed and calculated birthday greetings to Georg Joachim de Porris (better known by his professional name, Rheticus); he was born on this date in 1514. A mathematician, astronomer, cartographer, navigational-instrument maker, medical practitioner, and teacher, he was well-known in his day for his stature in all of those fields. But he is surely best-remembered as the sole pupil of Copernicus, whose work he championed– most impactfully, facilitating the publication of his master’s De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres)… and informing the most famous work by yesterday’s birthday boy, Galileo.

