(Roughly) Daily


“Your memory and your senses will be nourishment for your creativity”*…

Handel and Beethoven

On which senses do great creators rely? Randall Collins investigates…

Beethoven started going deaf in his late 20s.  Already famous by age 25 for his piano sonatas, at 31 he was traumatized by losing his hearing. But he kept on composing: the Moonlight Sonata during the onset of deafness; the dramatic Waldstein Sonata at 32; piano sonatas kept on coming until he was 50. In his deaf period came the revolutionary sounds of his 3rd through 8th symphonies, piano and violin concertos (age 32-40). After 44 he became less productive, with intermittent flashes (Missa Solemnis, Diabelli variations, 9th symphony) composed at 47-53, dying at 56. His last string quartets were composed entirely in his head, left unperformed in his lifetime.

Handel went blind in one eye at age 66; laboriously finished the oratorio he was working on; went completely blind at 68. He never produced another significant work. But he kept on playing organ concertos, “performing from memory, or extemporizing while the players waited for their cue” almost to the day he died, aged 74. 

Johann Sebastian Bach fell ill in his 64th year; next year his vision was nearly gone; he died at 65 “after two unsuccessful operations for a cataract.”  At 62 he was still producing great works; at 64 he finished assembling the pieces of his B Minor Mass (recycling his older works being his modus operandi). At death he left unfinished his monument of musical puzzles, The Art of the Fugue, on which he had been working since 55.

Can we conclude that it is more important for a composer to see than to hear?…

And, given examples like Milton, that it is more critical for poets and writers to hear than to see? More at “Deaf or Blind: Beethoven, Handel,” from @sociologicaleye.

* Arthur Rimbaud

###

As we contemplate creativity, we might recall that it was on this date in 2013 that Google– Google Search, YouTube, Google Mail, and Google Drive, et al.– went down for about 5 minutes. During that brief window, internet traffic around the world dropped by 40 percent.

“All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason.”*…

Descartes, the original (modern) Rationalist, and Immanuel Kant, who did his best to synthesize Descartes’ thought with empiricism (à la Hume)

As Robert Cottrell explains, a growing group of online thinkers couldn’t agree more…

Much of the best new writing online originates from activities in the real world — music, fine art, politics, law…

But there is also writing which belongs primarily to the world of the Internet, by virtue of its subject-matter and of its sensibility. In this category I would place the genre that calls itself Rationalism, the raw materials of which are cognitive science and mathematical logic.

I will capitalise Rationalism and Rationalists when referring to the writers and thinkers who are connected in one way or another with the Less Wrong forum (discussed below). I will do this to avoid confusion with the much broader mass of small-r “rational” thinkers — most of us, in fact — who believe their thinking to be founded on reasoning of some sort; and with “rationalistic” thinkers, a term used in the social sciences for people who favour the generalised application of scientific methods.

Capital-R Rationalism contends that there are specific techniques, drawn mainly from probability theory, by means of which people can teach themselves to think better and to act better — where “better” is intended not as a moral judgement but as a measure of efficiency. Capital-R Rationalism contends that, by recognising and eliminating biases common in human judgement, one can arrive at a more accurate view of the world and a more accurate view of one’s actions within it. When thus equipped with a more exact view of the world and of ourselves, we are far more likely to know what we want and to know how to get it.

Rationalism does not try to substitute for morality. It stops short of morality. It does not tell you how to feel about the truth once you think you have found it. By stopping short of morality it has the best of both worlds: It provides a rich framework for thought and action from which, in principle, one might advance, better equipped, into metaphysics. But the richness and complexity of deciding how to act Rationally in the world is such that nobody, having seriously committed to Rationalism, is ever likely to emerge on the far side of it.

The influence of Rationalism today is, I would say, comparable with that of existentialism in the mid-20th century. It offers a way of thinking and a guide to action with particular attractions for the intelligent, the dissident, the secular and the alienated. In Rationalism it is perfectly reasonable to contend that you are right while the World is wrong.

Rationalism is more of an applied than a pure discipline, so its effects are felt mainly in fields where its adepts tend to be concentrated. By far the highest concentration of Rationalists would appear to cohabit in the study and development of artificial intelligence; so it is hardly surprising that the main fruit of Rationalism to date has been the birth of a new academic field, existential risk studies, born of a convergence between Rationalism and AI, with science fiction playing a catalytic role. Leading figures in existential risk studies include Nicholas Bostrom at Oxford University and Jaan Tallinn at Cambridge University.

Another relatively new field, effective altruism, has emerged from a convergence of Rationalism and Utilitarianism, with the philosopher Peter Singer as catalyst. The leading figures in effective altruism, besides Singer, are Toby Ord, author of The Precipice; William MacAskill, author of Doing Good Better; and Holden Karnofsky, co-founder of GiveWell and blogger at Cold Takes.

A third new field, progress studies, has emerged very recently from the convergence of Rationalism and economics, with Tyler Cowen and Patrick Collison as its founding fathers. Progress studies seeks to identify, primarily from the study of history, the preconditions and factors which underpin economic growth and technological innovation, and to apply these insights in concrete ways to the promotion of future prosperity. The key text of progress studies is Cowen’s Stubborn Attachments.

I doubt there is any wholly original scientific content to Rationalism: It is a taker of facts from other fields, not a contributor to them. But by selecting and prioritising ideas which play well together, by dramatising them in the form of thought experiments, and by pursuing their applications to the limits of possibility (which far exceed the limits of common sense), Rationalism has become a contributor to the philosophical fields of logic and metaphysics and to conceptual aspects of artificial intelligence.

Tyler Cowen is beloved of Rationalists but would hesitate (I think) to identify with them. His attitude towards cognitive biases is more like that of Chesterton towards fences: Before seeking to remove them you should be sure that you understand why they were put there in the first place…
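
The “specific techniques, drawn mainly from probability theory” that Cottrell describes come down, at their core, to Bayesian updating: revise your confidence in a claim in proportion to how strongly the evidence favors it. A minimal sketch in Python, with purely illustrative numbers:

```python
# Toy Bayesian update: how much should one piece of evidence move a belief?
# Illustrative numbers only.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start mildly skeptical of a claim (20%), then observe evidence that is
# four times likelier if the claim is true than if it is false.
belief = 0.2
belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(belief, 3))  # 0.5 -- the evidence moves us from 20% to 50%
```

The point of the exercise is not the arithmetic itself but the habit of asking how much a given piece of evidence should move you.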

From hands-down the best guide I’ve found to the increasingly-impactful ideas at work in Rationalism and its related fields, and to the thinkers behind them: “Do the Right Thing,” from @robertcottrell in @TheBrowser. Eminently worth reading in full.

[Image above: source]

* Immanuel Kant, Critique of Pure Reason

###

As we ponder precepts, we might recall that it was on this date in 1937 that Hormel went public with its own exercise in recombination when it introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.

source

Written by (Roughly) Daily

July 5, 2022 at 1:00 am

“History is who we are and why we are the way we are”*…

What a long, strange trip it’s been…

March 12, 1989 Information Management, a Proposal

While working at CERN, Tim Berners-Lee first comes up with the idea for the World Wide Web. To pitch it, he submits a proposal for organizing scientific documents to his employers titled “Information Management, a Proposal.” In this proposal, Berners-Lee sketches out what the web will become, including early versions of the HTTP protocol and HTML.

The first entry in a timeline that serves as a table of contents for a series of informative blog posts: “The History of the Web,” from @jay_hoffmann.

* David McCullough

###

As we jack in, we might recall that it was on this date in 1969 that the world first learned of what would become the internet, which would, in turn, become the backbone of the web: UCLA issued a press release announcing the project that became ARPANET. It read, in part:

UCLA will become the first station in a nationwide computer network which, for the first time, will link together computers of different makes and using different machine languages into one time-sharing system.

Creation of the network represents a major forward step in computer technology and may serve as the forerunner of large computer networks of the future.

The ambitious project is supported by the Defense Department’s Advanced Research Projects Agency (ARPA), which has pioneered many advances in computer research, technology and applications during the past decade. The network project was proposed and is headed by ARPA’s Dr. Lawrence G. Roberts.

The system will, in effect, pool the computer power, programs and specialized know-how of about 15 computer research centers, stretching from UCLA to M.I.T. Other California network stations (or nodes) will be located at the Rand Corp. and System Development Corp., both of Santa Monica; the Santa Barbara and Berkeley campuses of the University of California; Stanford University and the Stanford Research Institute.

The first stage of the network will go into operation this fall as a subnet joining UCLA, Stanford Research Institute, UC Santa Barbara, and the University of Utah. The entire network is expected to be operational in late 1970.

Engineering professor Leonard Kleinrock [see here], who heads the UCLA project, describes how the network might handle a sample problem:

Programmers at Computer A have a blurred photo which they want to bring into focus. Their program transmits the photo to Computer B, which specializes in computer graphics, and instructs B’s program to remove the blur and enhance the contrast. If B requires specialized computational assistance, it may call on Computer C for help.

The processed work is shuttled back and forth until B is satisfied with the photo, and then sends it back to Computer A. The messages, ranging across the country, can flash between computers in a matter of seconds, Dr. Kleinrock says.

UCLA’s part of the project will involve about 20 people, including some 15 graduate students. The group will play a key role as the official network measurement center, analyzing computer interaction and network behavior, comparing performance against anticipated results, and keeping a continuous check on the network’s effectiveness. For this job, UCLA will use a highly specialized computer, the Sigma 7, developed by Scientific Data Systems of Los Angeles.

Each computer in the network will be equipped with its own interface message processor (IMP) which will double as a sort of translator among the Babel of computer languages and as a message handler and router.

Computer networks are not an entirely new concept, notes Dr. Kleinrock. The SAGE radar defense system of the Fifties was one of the first, followed by the airlines’ SABRE reservation system. At the present time, the nation’s electronically switched telephone system is the world’s largest computer network.

However, all three are highly specialized and single-purpose systems, in contrast to the planned ARPA system which will link a wide assortment of different computers for a wide range of unclassified research functions.

“As of now, computer networks are still in their infancy,” says Dr. Kleinrock. “But as they grow up and become more sophisticated, we will probably see the spread of ‘computer utilities’, which, like present electronic and telephone utilities, will service individual homes and offices across the country.”

source
Boelter Hall, UCLA

source

Written by (Roughly) Daily

July 3, 2022 at 1:00 am

“The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function”*…

The Long Tail

On the one hand: Ted Gioia suggests that, while ‘The Long Tail’ was supposed to boost alternative voices in music, movies, and books, the exact opposite has happened…

When I first heard people predict the rise of the Long Tail, I was amused. Not only did it seem wrong-headed, but it ran counter to everything I saw happening around me.

It pains me to say this—because the Long Tail was sold to us as an economic law that not only predicted a more inclusive era of prosperity, but would especially help creative people. According to its proponents, the Long Tail would revitalize our culture by expanding the scope of the arts and giving a boost to visionaries on the fringes of society.

Alternative voices would be nurtured and flourish. Music would get cooler and more surprising. Books would become more diverse and interesting. Indie films would reach larger audiences. Etc. etc. etc.

Hey, what’s not to like?

But it never happened. More to the point, it was never going to happen because the story was a fairy tale. I knew it back then because I had been hired on a number of occasions to analyze the Long Tail myself. But the flaws in the reasoning are far more obvious today, even to me.

Nonetheless many believed it—and many still do. So it’s worth digging into the story of the Long Tail, and examining exactly why it never delivered its promise.

And maybe we can find some alternative pathway to that lost cultural renaissance by seeing how this one went off the rails.

On the other hand: Cal Newport suggests that Kevin Kelly‘s fourteen-year-old prediction that an artist could make a living online with a thousand true fans is (finally) coming true…

In his “1,000 True Fans” essay, Kelly explains that he wasn’t as excited about this new economic model as others seemed to be. “The long tail is famously good news for two classes of people: a few lucky aggregators, such as Amazon and Netflix, and 6 billion consumers,” he writes. “But the long tail is a decidedly mixed blessing for creators.” If your work lives in the long tail, the introduction of Internet-based markets might mean that you go from selling zero units of your creations to selling a handful of units a month, but this makes little difference to your livelihood. “The long tail offers no path out of the quiet doldrums of minuscule sales,” Kelly writes. “Other than aim for a blockbuster hit, what can an artist do to escape the long tail?”

This question might seem fatalistic, but Kelly had a solution. If your creative work exists in the long tail, generating a small but consistent number of sales, then it’s probably sufficiently good to support a small but serious fan base, assuming you’re willing to put in the work required to cultivate this community. In an earlier age, a creative professional might be limited to fans who lived nearby. But by using the tools of the Internet, Kelly argued, it was now possible for creative types to both find and interact with supporters all around the world…

A shining example of the 1,000 True Fans model is the podcasting boom. There are more than eight hundred and fifty thousand active podcasts available right now. Although most of these shows are small and don’t generate much money, the number of people making a full-time living off original audio content is substantial. The key to a financially viable podcast is to cultivate a group of True Fans eager to listen to every episode. The value of each such fan, willing to stream hours and hours of a creator’s content, is surprisingly large; if sufficiently committed, even a modest-sized audience can generate significant income for a creator. According to an advertising agency I consulted, for example, a weekly podcast that generates thirty thousand downloads per episode should be able to reach Kelly’s target of generating a hundred thousand dollars a year in income. Earning a middle-class salary by talking through a digital microphone to a fiercely loyal band of supporters around the world, who are connected by the magic of the Internet, is about as pure a distillation of Kelly’s vision as you’re likely to find…

The real breakthroughs that enabled the revival of the 1,000 True Fans model are better understood as cultural. The rise in both online news paywalls and subscription video-streaming services trained users to be more comfortable paying à la carte for content. When you already shell out regular subscription fees for newyorker.com, Netflix, Peacock, and Disney+, why not also pay for “Breaking Points,” or throw a monthly donation toward Maria Popova? In 2008, when Kelly published the original “1,000 True Fans” essay, it was widely assumed that it would be hard to ever persuade people to pay money for most digital content. (This likely explains why so many of Kelly’s examples focus on selling tangible goods, such as DVDs or custom prints.) This is no longer true. Opening up these marketplaces to purely digital artifacts—text, audio, video, online classes—significantly lowered the barriers to entry for creative professionals looking to make a living online…
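
Newport’s back-of-the-envelope figure above (thirty thousand downloads per episode supporting roughly a hundred thousand dollars a year) is easy to reproduce under some assumed ad rates; the CPM and ad-slot numbers in this sketch are my assumptions, not the agency’s:

```python
# Sanity check of "30,000 downloads per episode ~= $100,000 a year".
# Assumptions (mine, not the agency's): a weekly show, and an effective ad
# rate of about $65 per 1,000 downloads across all ad slots combined
# (roughly two or three slots at a $20-$30 CPM each).

downloads_per_episode = 30_000
episodes_per_year = 52
effective_cpm = 65  # assumed dollars earned per 1,000 downloads

annual_downloads = downloads_per_episode * episodes_per_year
annual_revenue = annual_downloads / 1_000 * effective_cpm
print(f"${annual_revenue:,.0f} per year")  # -> $101,400 per year
```

Halve the assumed CPM and the same audience earns half as much, which is why the figure turns on how committed, and therefore how monetizable, those listeners are.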

But can this last? Is it destined to fall prey to the forces that Gioia catalogues?

The recent history of the Internet, however, warns that we shouldn’t necessarily expect the endearingly homegrown nature of these 1,000 True Fans communities to persist. When viable new economic niches emerge online, venture-backed businesses, looking to extract their cut, are typically not far behind. Services such as Patreon and Kickstarter are jostling for a dominant position in this direct-to-consumer creative marketplace. A prominent recent example of such attempts to centralize the True Fan economy is Substack, which eliminates friction for writers who want to launch paid e-mail newsletters. Substack now has more than a million subscribers who pay for access to newsletters, and is currently valued at around six hundred and fifty million dollars. With this type of money at stake, it’s easy to imagine a future in which a small number of similarly optimized platforms dominate most of the mechanisms by which creative professionals interact with their 1,000 True Fans. In the optimistic scenario, this competition will lead to continued streamlining of the process of serving supporters, increasing the number of people who are able to make a good living off of their creative work: an apotheosis of sorts of Kelly’s original vision. A more pessimistic prediction is that the current True Fan revolution will eventually go the way of the original Web 2.0 revolution, with creators increasingly ground in the gears of monetization. The Substack of today makes it easy for a writer to charge fans for a newsletter. The Substack of tomorrow might move toward a flat-fee subscription model, driving users toward an algorithmically optimized collection of newsletter content, concentrating rewards within a small number of hyper-popular producers, and in turn eliminating the ability for any number of niche writers to make a living…

The future of the creative economy: “Where Did the Long Tail Go?,” from @tedgioia and “The Rise of the Internet’s Creative Middle Class,” from Cal Newport on @kevin2kelly in @NewYorker.

* F. Scott Fitzgerald (“The Crack-Up,” Esquire, February, 1936)

###

As we contemplate culture and commerce, we might recall that it was on this date in 1894 (after 30 states had already enshrined the occasion) that Labor Day became a federal holiday in the United States.

The country’s first Labor Day parade in New York City on Sept. 5, 1882. This sketch appeared in Frank Leslie’s Illustrated Newspaper.

source (and source of more on the history of Labor Day)

“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…
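
To make “adapt its behavior through trial and error” concrete, here is a deliberately toy sketch, not McMahon’s or Scellier’s actual algorithm: a black-box system with two tunable knobs, nudged at random, keeping only the nudges that reduce error on a simple classification task.

```python
import random

# A toy "learn by trial and error" loop. The "physical system" is a black box
# with two tunable knobs (w, b); we observe only its outputs, never its
# internals, and keep any random nudge to the knobs that reduces the error.

def black_box(x, w, b):
    return 1 if w * x + b > 0 else 0  # the system's response to input x

# Toy task: inputs 0-4 belong to class 0, inputs 5-9 to class 1.
data = [(x, 0) for x in range(0, 5)] + [(x, 1) for x in range(5, 10)]

def error(w, b):
    return sum(black_box(x, w, b) != label for x, label in data)

random.seed(0)
w, b = 0.0, 0.0
for _ in range(500):
    dw, db = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    if error(w + dw, b + db) <= error(w, b):  # keep nudges that don't hurt
        w, b = w + dw, b + db

print(f"{error(w, b)} misclassified out of {len(data)}")
```

Real physical learning machines replace the random nudges with cleverer update rules, and the black box with light, voltages, or vibrations, but the learning loop has the same shape.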

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing

###

As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame

source
