(Roughly) Daily


“I prefer radio to TV because the pictures are better”*…

Was Walter Benjamin the ur-podcaster? Peter E. Gordon on Benjamin’s audio adventures, how they relate to his cultural theories, and what they suggest about what has become (and may yet become) of audio…

No audio recordings of Walter Benjamin have survived. His voice was once described as beautiful, even melodious—just the sort of voice that would have been suitable for the new medium of radio broadcasting that spread across Germany in the 1920s. If one could pay the fee for a wireless receiver, Benjamin could be heard in the late afternoons or early evenings, often during what was called “Youth Hour.” His topics ranged widely, from a brass works outside Berlin to a fish market in Naples. In one broadcast, he lavished his attention on an antiquarian bookstore with aisles like labyrinths, whose walls were adorned with drawings of enchanted forests and castles. For others, he related “True Dog Stories” or perplexed his young listeners with brain teasers and riddles. He also wrote, and even acted in, a variety of radio plays that satirized the history of German literature or plunged into surrealist fantasy. One such play introduced a lunar creature named Labu who bore the august title “President of the Moon Committee for Earth Research.”

Today Benjamin is widely esteemed as one of the foremost cultural critics and theorists of the 20th century. But his career was uneven and marked by failure. In 1925, after the faculty of philosophy in Frankfurt rejected his enigmatic study of German Baroque drama and dashed his hopes for an academic career, he found himself adrift, with little assurance of a regular income. But this failure also brought freedom. His untethering from the university meant that he could indulge in his interests without restraint, and he turned his talents to writing essays that took in the whole panorama of modern life—from high literature to children’s books and from photography to film—and, for nearly six years, he supplemented his earnings with radio broadcasts, some for adults and others meant especially for children. Of the many broadcasts, about 90 in all, that he produced for the radio stations in Frankfurt and Berlin, only a fragment of a single audio recording has been preserved; unfortunately, Benjamin’s voice cannot be heard.

Now transcripts of these broadcasts have been assembled and translated into English in a new volume edited by Lecia Rosenthal, whose incisive introduction assists the reader in appreciating their true significance. One can’t help but wonder what Benjamin would have made of all this attention, since he was inclined to dismiss his radio work as unimportant. In correspondence with his friend Gershom Scholem, he wrote with some embarrassment of “piddling radio matters” and condemned nearly all of it as having “no interest except in economic terms.” Today we know that he was mistaken. The transcripts are more than mere ephemera; they are perfect specimens of Benjamin’s interpretative method, exercises in a style of urban semiotics that he would later apply during his exile in Paris. Hannah Arendt once likened her late friend to a pearl diver who possessed a gift for diving into the wreckage of bourgeois civilization and emerging into the sunlight with the rarest of treasures. The radio transcripts offer further evidence of a genius whose career was ended far too soon…

A fascinating– and illuminating– read. Walter Benjamin’s radio years: “President of the Moon Committee,” from @thenation.

* Alistair Cooke

###

As we listen in, we might spare a thought for Hugo Gernsback, a Luxembourgish-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884. Gernsback founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio.  And he was a prolific inventor, with 80 patents at the time of his death.

But it was as a writer and publisher that he probably left his most lasting mark:  In 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown in 1911, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.

The “Ralph” series was an astounding success with readers; and in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories.  Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”

Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”

Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”

Gernsback, wearing his invention, TV Glasses

 source

“The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust.”*…

Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity — a place where extrapolation breaks down and new models must be applied — and the world will pass beyond our understanding.

Vernor Vinge, True Names and Other Dangers

The once-vibrant transhumanist movement doesn’t capture as much attention as it used to; but as George Dvorsky explains, its ideas are far from dead. Indeed, they helped seed the Futurist movements that are so prominent today…

[On the heels of 9/11] transhumanism made a lot of sense to me, as it seemed to represent the logical next step in our evolution, albeit an evolution guided by humans and not Darwinian selection. As a cultural and intellectual movement, transhumanism seeks to improve the human condition by developing, promoting, and disseminating technologies that significantly augment our cognitive, physical, and psychological capabilities. When I first stumbled upon the movement, the technological enablers of transhumanism were starting to come into focus: genomics, cybernetics, artificial intelligence, and nanotechnology. These tools carried the potential to radically transform our species, leading to humans with augmented intelligence and memory, unlimited lifespans, and entirely new physical and cognitive capabilities. And as a nascent Buddhist, it meant a lot to me that transhumanism held the potential to alleviate a considerable amount of suffering through the elimination of disease, infirmity, mental disorders, and the ravages of aging.

The idea that humans would transition to a posthuman state seemed both inevitable and desirable, but, having an apparently functional brain, I immediately recognized the potential for tremendous harm.

The term “transhumanism” popped into existence during the 20th century, but the idea has been around for a lot longer than that.

The quest for immortality has always been a part of our history, and it probably always will be. The Mesopotamian Epic of Gilgamesh is the earliest written example, while the Fountain of Youth—the literal Fountain of Youth—was the obsession of Spanish explorer Juan Ponce de León.

Notions that humans could somehow be modified or enhanced appeared during the European Enlightenment of the 18th century, with French philosopher Denis Diderot arguing that humans might someday redesign themselves into a multitude of types “whose future and final organic structure it’s impossible to predict,” as he wrote in D’Alembert’s Dream…

The Russian cosmists of the late 19th and early 20th centuries foreshadowed modern transhumanism, as they ruminated on space travel, physical rejuvenation, immortality, and the possibility of bringing the dead back to life, the latter a portent of cryonics—a staple of modern transhumanist thinking. From the 1920s through to the 1950s, thinkers such as British biologist J. B. S. Haldane, Irish scientist J. D. Bernal, and British biologist Julian Huxley (who popularized the term “transhumanism” in a 1957 essay) were openly advocating for such things as artificial wombs, human clones, cybernetic implants, biological enhancements, and space exploration.

It wasn’t until the 1990s, however, that a cohesive transhumanist movement emerged, a development largely brought about by—you guessed it—the internet…

[There follows a brisk and helpful history of transhumanist thought, then an account of the recent past, and present…]

Some of the transhumanist groups that emerged in the 1990s and 2000s still exist or evolved into new forms, and while a strong pro-transhumanist subculture remains, the larger public seems detached and largely uninterested. But that’s not to say that these groups, or the transhumanist movement in general, didn’t have an impact…

“I think the movements had mainly an impact as intellectual salons where blue-sky discussions made people find important issues they later dug into professionally,” said Sandberg. He pointed to Oxford University philosopher and transhumanist Nick Bostrom, who “discovered the importance of existential risk for thinking about the long-term future,” which resulted in an entirely new research direction. The Center for the Study of Existential Risk at the University of Cambridge and the Future of Humanity Institute at Oxford are the direct results of Bostrom’s work. Sandberg also cited artificial intelligence theorist Eliezer Yudkowsky, who “refined thinking about AI that led to the AI safety community forming,” and also the transhumanist “cryptoanarchists” who “did the groundwork for the cryptocurrency world,” he added. Indeed, Vitalik Buterin, a co-founder of Ethereum, subscribes to transhumanist thinking, and his father, Dmitry, used to attend our meetings at the Toronto Transhumanist Association…

Intellectual history: “What Ever Happened to the Transhumanists?,” from @dvorsky.

See also: “The Heaven of the Transhumanists” from @GenofMod (source of the image above).

* Donna Haraway

###

As we muse on mortality, we might send carefully-calculated birthday greetings to Marvin Minsky; he was born on this date in 1927.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.

 source

“Philosophy is a battle against the bewitchment of our intelligence by means of language”*…

Clockwise from top: Iris Murdoch, Philippa Foot, Mary Midgley, Elizabeth Anscombe

How four women defended ethical thought from the legacy of positivism…

By Michaelmas Term 1939, mere weeks after the United Kingdom had declared war on Nazi Germany, Oxford University had begun a change that would wholly transform it by the academic year’s end. Men ages twenty and twenty-one, save conscientious objectors and those deemed physically unfit, were being called up, and many others just a bit older volunteered to serve. Women had been able to matriculate and take degrees at the university since 1920, but members of the then all-male Congregation had voted to restrict the number of women to fewer than a quarter of the overall student population. Things changed rapidly after the onset of war. The proportion of women shot up, and, in many classes, there were newly as many women as men.

Among the women who experienced these novel conditions were several who did courses in philosophy and went on to strikingly successful intellectual careers. Elizabeth Anscombe, noted philosopher and Catholic moral thinker who would go on to occupy the chair in philosophy that Ludwig Wittgenstein had held at Cambridge, started a course in Greats—roughly, classics and philosophy—in 1937, as did Jean Austin (née Coutts), who would marry philosopher J. L. Austin and later have a long teaching career at Oxford. Iris Murdoch, admired and beloved philosopher and novelist, began to read Greats in 1938 at the same time as Mary Midgley (née Scrutton), who became a prominent public philosopher and animal ethicist. A year later Philippa Foot (née Bosanquet), distinguished moral philosopher, started to read the then relatively new course PPE—philosophy, politics and economics—and three years after that Mary Warnock (née Wilson), subsequently a high-profile educator and public intellectual, went up to read Greats.

Several of these women would go on to make groundbreaking contributions to ethics…

Oxford philosophy in the early to mid 1930s had been in upheaval. The strains of Hegel-inspired idealism that had remained influential in Britain through the first decade of the twentieth century had been definitively displaced, in the years before World War I, by realist doctrines which claimed that knowledge must be of what is independent of the knower, and which were elaborated within ethics into forms of intuitionism. By the ’30s, these schools of thought were themselves threatened by new waves of enthusiasm for the themes of logical positivism developed by a group of philosophers and scientists, led by Moritz Schlick, familiarly known as the Vienna Circle. Cambridge University’s Susan Stebbing, the first woman to be appointed to a full professorship in philosophy in the UK, had already interacted professionally with Schlick in England and had championed tenets of logical positivism in essays and public lectures when, in 1933, Oxford don Gilbert Ryle recommended that his promising tutee Freddie Ayer make a trip to Vienna. Ayer obliged, and upon his return he wrote a brief manifesto, Language, Truth and Logic (1936), in defense of some of the Vienna Circle’s views. The book became a sensation, attracting attention and debate far beyond the halls of academic philosophy. Its bombshell contention was that only two kinds of statements are meaningful: those that are true solely in virtue of the meanings of their constituent terms (such as “all bachelors are unmarried”), and those that can be verified through physical observation. The gesture seemed to consign to nonsense, at one fell swoop, the statements of metaphysics, theology, and ethics.

This turn to “verification” struck some as a fitting response to strains of European metaphysics that many people, rightly or wrongly, associated with fascist irrationalism and the gathering threat of war. But not everyone at Oxford was sympathetic. Although Ayer’s ideas weren’t universally admired, they were widely discussed, including by a group of philosophers led by Isaiah Berlin, who met regularly at All Souls College—among them, J. L. Austin, Stuart Hampshire, Donald MacKinnon, Donald MacNabb, Anthony Woozley, and Ayer himself. Oxford philosophy’s encounter with logical positivism would have a lasting impact and would substantially set the terms for subsequent research in many areas of philosophy—including, it would turn out, ethics and political theory…

A fascinating intellectual history of British moral philosophy in the second half of the 20th century: “Metaphysics and Morals,” Alice Crary in @BostonReview.

* Ludwig Wittgenstein

###

As we ponder precepts, we might recall that it was on this date in 1248 that the seat of the action described above, The University of Oxford, received its Royal Charter from King Henry III.   While it has no known date of foundation, there is evidence of teaching as far back as 1096, making it the oldest university in the English-speaking world and the world’s second-oldest university in continuous operation (after the University of Bologna).

The university operates the world’s oldest university museum, as well as the largest university press in the world, and the largest academic library system in Britain.  Oxford has educated and/or employed many notables, including 72 Nobel laureates, 4 Fields Medalists, 6 Turing Award winners, and 27 prime ministers of the United Kingdom, as well as many heads of state and government around the world.


 source

“The real alchemy consists in being able to turn gold back again into something else; and that’s the secret that most of your friends have lost.”*…

16th century alchemical equipment, and 21st century reconception of Luria’s 16th century Sephirotic array by Naomi Teplow.

About a decade ago, the formidable Lawrence Weschler was a visiting scholar at the Getty Research Institute in Los Angeles, where he conceived an exhibit that, sadly, never materialized. Happily, he has shared the design in his wonderful newsletter, Wondercabinet…

Lead into Gold:

Proposal for a little jewel-box exhibit

surveying the Age-Old Quest

To Wrest Something from Nothing,

from the Philosopher’s Stone

through Subprime Loans

The boutique-sized (four-room) show would be called “Lead into Gold” and would track the alchemical passion—from its prehistory in the memory palaces of late antiquity through the Middle Ages

(those elaborate mnemonic techniques whereby monks and clerks stored astonishing amounts of details in their minds by placing them in ever-expanding imaginary structures, forebears, as it were, to the physical wondercabinets of the later medieval period—a sampling of manuscripts depicting the technique would grace a sort of foyer to the exhibition),

into its high classic phase (the show’s first long room) with alchemy as pre-chemistry (with maguses actually trying, that is, to turn physical lead into physical gold, all the beakers and flasks and retorts, etc.) to one side, and astrology as pre-astronomy (the whole deliriously marvelous sixteenth-into-seventeenth centuries) to the other, and Isaac Newton serving as a key leitmotif figure through the entire show (though starting out here), recast no longer in his role as the first of the moderns so much as “the Last of the Sumerians” (as an astonished John Maynard Keynes dubbed him, upon stumbling on a cache of thousands of pages of his Cambridge forebear’s detailed alchemical notes, not just from his early years before the Principia, but from throughout his entire life!).

The show would then branch off in two directions, in a sort of Y configuration. To one side:

1) The Golden Path, which is to say the growing conviction among maguses and their progeny during the later early-modern period that the point was allegorical, an inducement to soul-work, in which one was called upon to try to refine the leaden parts of oneself into ever more perfect golden forms, hence Faustus and Prospero through Jung, with those magi Leibniz and Newton riffing off Kabbalistic meditations on Infinity and stumbling instead onto the infinitesimal as they invent the Calculus, in turn eventually opening out (by way of Blake) onto all those Sixties versions, the dawning of the Age of Aquarius, etc., which set the stage for the Whole Earth Catalog and all those kid-maguses working in their garages (developing both hardware and software: fashioning the Calculus into material reality) and presently the Web itself (latter day version of those original memory palaces from back in the show’s foyer, writ large);

while, branching off to the other side, we would have:

2) The Leaden Path, in which moneychangers and presently bankers decided to cut to the chase, for, after all, who needed lead and who needed gold and for god’s sake who needed a more perfect soul when you could simply turn any old crap into money (!)—thus, for example, the South Sea Bubble, in which Newton lost the equivalent of a million dollars (whereupon he declared that he could understand the transit of stars but not the madness of men), tulipomania, etc., and thence onward to Freud (rather than Jung) and his conception of “filthy lucre” and George Soros (with his book, The Alchemy of Finance), with the Calculus showing up again across ever more elaborate permutations, leading on through Ponzi and Gekko (by way of Ayn Rand and Alan “The Wizard” Greenspan) to the whole derivatives bubble/tumor, as adumbrated in part by my own main man, the money artist JSG Boggs, and then on past that to the purest mechanism ever conceived for generating fast money out of crap: meth labs (which deploy exactly but exactly the same equipment as the original alchemists, beakers and flasks and retorts, to accomplish the literal-leaden version of what they were after, the turning of filth into lucre).

And I appended a xerox of that napkin sketch.

Eminently worth reading– and enjoying– in full. “The age-old human quest to turn nothing into something.”

* Edith Wharton

###

As we appreciate the abiding attraction of alchemy, we might recall that it was on this date in 1933 that President Franklin D. Roosevelt signed the act creating the Tennessee Valley Authority. A feature of the New Deal, the TVA was created to provide navigation, flood control, electricity generation, fertilizer manufacturing, regional planning, and economic development to the Tennessee Valley, a region (all of Tennessee, portions of Alabama, Mississippi, and Kentucky, and small areas of Georgia, North Carolina, and Virginia) which was particularly hard hit by the Great Depression relative to the rest of the nation. While owned by the federal government, the TVA receives no taxpayer funding and operates much like a private for-profit company.

The TVA has been criticized for its use of eminent domain, which resulted in the displacement of over 125,000 Tennessee Valley residents for the agency’s infrastructure projects. But on balance the TVA has been documented as a success in its efforts to modernize the Tennessee Valley and to help recruit new employment opportunities to the region.

FDR signing the TVA Act [source]

“Ultimately, it is the desire, not the desired, that we love”*…

Or is it? The web– and the world– are awash in talk of the Mimetic Theory of Desire (or Rivalry, as its creator, René Girard, would also have it). Stanford professor (and Philosophy Talk co-host) Joshua Landy weighs in with a heavy word of caution…

Here are two readings of Shakespeare’s Hamlet. Which do you think we should be teaching in our schools and universities?

Reading 1. Hamlet is unhappy because he, like all of us, has no desires of his own, and therefore has no being, properly speaking. The best he can do is to find another person to emulate, since that’s the only way anyone ever develops the motivation to do anything. Shakespeare’s genius is to show us this life-changing truth.

Reading 2. Hamlet is unhappy because he, like all of us, is full of body thetans, harmful residue of the aliens brought to Earth by Xenu seventy-five million years ago and disintegrated using nuclear bombs inside volcanoes. Since it is still some time until the practice of auditing comes into being, Hamlet has no chance of becoming “clear”; it is no wonder that he displays such melancholy and aimlessness. Shakespeare’s genius is to show us this life-changing truth.

Whatever you make of the first, I’m rather hoping that you feel at least a bit uncomfortable with the second. If so, I have a follow-up question for you: what exactly is wrong with it? Why not rewrite the textbooks so as to make it our standard understanding of Shakespeare’s play? Surely you can’t fault the logic behind it: if humans have indeed been full of body thetans since they came into existence, and Hamlet is a representation of a human being, Hamlet must be full of body thetans. What is more, if everyone is still full of body thetans, then Shakespeare is doing his contemporaries a huge favor by telling them, and the new textbooks will be doing us a huge favor by telling the world. Your worry, presumably, is that this whole body thetan business is just not true. It’s an outlandish hypothesis, with nothing whatsoever to support it. And since, as Carl Sagan once said, “extraordinary claims require extraordinary evidence,” we would do better to leave it alone.

I think you see where I’m going with this. The fact is, of course, that the first reading is just as outlandish as the second. As I’m about to show (not that it should really need showing), human beings do have desires of their own. That doesn’t mean that all our desires are genuine; it’s always possible to be suckered into buying a new pair of boots, regardless of the fact that they are uglier and shoddier than our old ones, just because they are fashionable. What it means is that some of our desires are genuine. And having some genuine desires, and being able to act on them, is sufficient for the achievement of authenticity. For all we care, Hamlet’s inky cloak could be made by Calvin Klein, his feathered hat by Diane von Furstenberg; the point is that he also has motivations (to know things, to be autonomous, to expose guilt, to have his story told accurately) that come from within, and that those are the ones that count.

To my knowledge, no one in the academy actually reads Hamlet (or anything else) the second way. But plenty read works of literature the first way. René Girard, the founder of the approach, was rewarded for doing so with membership in the Académie française, France’s elite intellectual association. People loved his system so much that they established a Colloquium on Violence and Religion, hosted by the University of Innsbruck, complete with a journal under the ironically apt name Contagion. More recently, Peter Thiel, the co-founder of PayPal, loved it so much that he sank millions of dollars into Imitatio, an institute for the dissemination of Girardian thought. And to this day, you’ll find casual references to the idea everywhere, from people who seem to think it’s a truth, one established by René Girard. (Here’s a recent instance from the New York Times opinion pages: “as we have learned from René Girard, this is precisely how desires are born: I desire something by way of imitation, because someone else already has it.”) All of which leads to an inevitable question: what’s the difference between Girardianism and Scientology? Why has the former been more successful in the academy? Why is the madness of theory so, well, contagious?…

Are we really dependent on others for our desires? Does that mechanism inevitably lead to rivalry, scapegoating, and division? @profjoshlandy suggests not: “Deceit, Desire, and the Literature Professor: Why Girardians Exist,” in @StanfordArcade. Via @UriBram in @TheBrowser. Eminently worth reading in full.

* Friedrich Nietzsche (an inspiration to Girard)

###

As we tease apart theorizing, we might spare a thought for William Whewell; he died on this date in 1866. A scientist, Anglican priest, philosopher, theologian, and historian of science, he was Master of Trinity College, Cambridge.

At a time when specialization was increasing, Whewell was renowned for the breadth of his work: he published in the disciplines of mechanics, physics, geology, astronomy, and economics, while also finding the time to compose poetry, author a Bridgewater Treatise, translate the works of Goethe, and write sermons and theological tracts. In mathematics, Whewell introduced what is now called the Whewell equation, defining the shape of a curve without reference to an arbitrarily chosen coordinate system. He founded mathematical crystallography and developed a revision of Friedrich Mohs’s classification of minerals. And he organized thousands of volunteers internationally to study ocean tides, in what is now considered one of the first citizen science projects.
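(A quick gloss, ours rather than the article’s: a Whewell– or “intrinsic”– equation pins down a plane curve by relating its arc length s to its tangential angle φ, the angle the tangent makes with some fixed direction. The standard examples: a circle of radius a, whose tangent turns uniformly, satisfies dφ/ds = 1/a, so the whole curve is captured by s = aφ; a catenary with parameter a is, just as compactly, s = a tan φ. No x-axis or y-axis ever enters the description.)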

But some argue that Whewell’s greatest gift to science was his wordsmithing: He created the words scientist and physicist by analogy with the word artist; they soon replaced the older term natural philosopher. He also named linguistics, consilience, catastrophism, uniformitarianism, and astigmatism.

Other useful words were coined to help his friends: biometry for John Lubbock; Eocene, Miocene, and Pliocene for Charles Lyell; and for Michael Faraday, electrode, anode, cathode, diamagnetic, paramagnetic, and ion (whence the sundry other particle names ending -ion).

source