(Roughly) Daily

Archive for May 2023

“Those who live by the sea can hardly form a single thought of which the sea would not be part”*…

If only. All of us on this interconnected planet are deeply beholden to our oceans; but all too few of us, all too infrequently, pay them heed. Surabhi Ranganathan explores one too-seldom-considered dimension in which we need to address that deficit: the “Law of the Sea.” As she explains, the growing international competition for reclamation, navigation, cabling, and undersea resource rights, against the backdrop of climate change, demands a radically revised approach…

I write this essay in an office in Singapore, where I have just learned an arresting fact. The legal historians Antony Anghie and Kevin Tan have informed me that in the course of my arrival, via Terminal 3 of Singapore’s Changi Airport, I must have crossed – on foot – the probable spot where, more than 400 years ago, the Dutch East India Company (VOC) Captain Jacob van Heemskerk captured the Santa Catarina, a Portuguese ship. This makes sense: in Martine van Ittersum’s rich description of the incident, she notes that it took place at the entrance of the Singapore Straits. Heemskerk, the story goes, made a wild dash to Johor from Tioman Island upon receiving news that two Portuguese carracks laden with spices, silks, and porcelain would be moving through the Straits. Having missed the first, he awoke on the morning of February 25, 1603, to find the second, the Catarina, right before his eyes. He swiftly captured the ship just off Singapore’s eastern shoals. In the time since that event, projects of reclamation have increased Singapore’s total land area by 25 percent, and Changi Airport occupies one such reclaimed part, sitting where the shoals used to be.

The Catarina’s capture occupies an important place in the history of international law. The incident was part of an imperial struggle between European states over access to trade with the East Indies. Such trade promised fabulous wealth: the goods recovered from this event alone sold for over three million guilders in the markets of Amsterdam, an amount that was roughly double the capital of the English East India Company. Portugal was outraged by the loss, while the VOC was keen to defend its actions. On retainer from the company, the jurist Hugo Grotius—then just in his early twenties!—wrote a brief that is now regarded as a foundational text, Mare Liberum, or The Free Sea.

Grotius argued that the sea was entirely unlike land. Land, being fixed, cultivable and, most importantly, exhaustible through use, could be regarded as divisible, subject to public and private ownership, and demarcated by national boundaries. The sea was fluid and constantly in movement; it was indivisible, unoccupiable, inexhaustible, indeed unalterable for better or worse via human activity. As such, it was irreducible to private ownership or state sovereignty. That being the case, it was Portugal that had acted wrongfully in claiming exclusive rights of maritime navigation and commerce with the Indies.

The Grotian imaginary of the sea persisted for centuries. The principle of the freedom of the seas came to define oceanic activities from navigation to fishing. Indeed, modern international law continues to express a principle of maritime freedom, though it is a far narrower form of freedom than Grotius initially claimed.

Today, international treaties, states, institutions, corporations, and courts all recognize that the ocean is divisible and, in parts, even appropriable, in the same way as land. Oceanic resources are exhaustible and can also be enhanced by human endeavor: cultivation through new methods like aquaculture is increasingly seen as essential to assure the global supply of fish. In the decades since the Second World War, a dense network of legal rules on access, use-rights, and responsibilities has developed to regulate the crowding conglomerations of interests and territorial claims upon the oceans.

Moreover, international law has been increasingly called upon not only to articulate the ways land and sea resemble each other, but also to address the mutability of those very categories. Thanks to legal and technological innovations, what was once sea might become land: the reclamation projects that produced the site of Changi Airport are but one example. In the other direction, rising sea levels and intensifying extreme weather events can quickly turn what was once land into sea. Down in the deep, the binary between land and sea is confounded by formations which appear as neither fully one nor quite the other.

The shifting relation between land and sea reflects the scale of human impact on the environment. This unstable relation forces us to confront the consequences of climate change, as the fixed certainties – soil, resources, infrastructure – that have for so long governed our imagination of land begin to fall apart. As a result, we must contend with new expectations of, and investments in, the sea…

Down in the deep, the legal distinction between land and sea no longer holds– and that’s a problem: “The Law of the Sea,” from @SurabhiRanganat in @thedialmag.

* Hermann Broch

###

As we go deep, we might recall that it was on this date in 1911 that RMS Titanic was launched from the Belfast shipyard in which it was built, the largest passenger ship of its day. A state-of-the-art steamship, it set sail from Southampton on its maiden voyage on April 10th of the following year, bound for New York City. Four days later, after calls at Cherbourg in France and Queenstown (now Cobh) in Ireland, the “unsinkable” Titanic collided with the iceberg that sent it under in the North Atlantic, 375 miles south of Newfoundland.

When the location of the wreck of Titanic was discovered in 1985, there was fear that extant Admiralty law would allow for the “looting” of what its discoverer believed should be “a monument.” In a demonstration that the Law of the Sea can in fact be revised, the RMS Titanic Maritime Memorial Act was passed in 1986. After the Act’s passage, the Department of State proposed an agreement with the United Kingdom, Canada, and France (as well as other interested countries) to enact the policies from the 1986 Act on an international scale… the U.K. ratified it briskly, but the U.S. didn’t get around to it until 2019. France and Canada are pending. In the meantime, the wreck of Titanic has been revisited on numerous occasions by explorers, scientists, filmmakers, tourists, and salvagers, who have recovered thousands of items from the debris field for conservation, public display… and sale.

(For perspective on scale)

source

Written by (Roughly) Daily

May 31, 2023 at 1:00 am

“Charisma is not so much getting people to like you as getting people to like themselves when you’re around”*…

Donald Trump and Barack Obama at Trump’s inauguration (source)

Charisma: hard to define, but clear when one encounters it. Joe Zadeh looks at charisma’s history– both as a phenomenon and as a concept– and contemplates its future (spoiler alert– AI figures).

After recounting the story of Stefan George, a German poet and thought leader who was hugely consequential in Germany in the first half of the 20th century, he turns to pioneering sociologist Max Weber, who met George in 1910…

At the time, charisma was an obscure religious concept used mostly in the depths of Christian theology. It had featured almost 2,000 years earlier in the New Testament writings of Paul to describe figures like Jesus and Moses who’d been imbued with God’s power or grace. Paul had borrowed it from the Ancient Greek word “charis,” which more generally denoted someone blessed with the gift of grace. Weber thought charisma shouldn’t be restricted to the early days of Christianity, but rather was a concept that explained a far wider social phenomenon, and he would use it more than a thousand times in his writings. He saw charisma echoing throughout culture and politics, past and present, and especially loudly in the life of Stefan George…

Weber had died in 1920, before George truly reached the height of his powers (and before the wave of totalitarian dictatorships that would define much of the century), but he’d already seen enough to fatten his theory of charisma. At times of crisis, confusion and complexity, Weber thought, our faith in traditional and rational institutions collapses and we look for salvation and redemption in the irrational allure of certain individuals. These individuals break from the ordinary and challenge existing norms and values. Followers of charismatic figures come to view them as “extraordinary,” “superhuman” or even “supernatural” and thrust them to positions of power on a passionate wave of emotion. 

In Weber’s mind, this kind of charismatic power wasn’t just evidenced by accounts of history — of religions and societies formed around prophets, saints, shamans, war heroes, revolutionaries and radicals. It was also echoed in the very stories we tell ourselves — in the tales of mythical heroes like Achilles and Cú Chulainn. 

These charismatic explosions were usually short-lived and unstable — “every hour of its existence brings it nearer to this end,” wrote Weber — but the most potent ones could build worlds and leave behind a legacy of new traditions and values that then became enshrined in more traditional structures of power. In essence, Weber believed, all forms of power started and ended with charisma; it drove the volcanic eruptions of social upheaval. In this theory, he felt he’d uncovered “the creative revolutionary force” of history. 

Weber was not the first to think like this. Similar ideas had been floating around at least as far back as the mid-1700s, when the Scottish philosopher David Hume had written that in the battle between reason and passion, the latter would always win. And they murmured in the 1800s in Thomas Carlyle’s “Great Man Theory” and in Nietzsche’s idea of the “Übermensch.” But none would have quite the global impact of Weber, whose work on charisma would set it on a trajectory to leap the fence of religious studies and become one of the most overused yet least understood words in the English language.

A scientifically sound or generally agreed-upon definition of charisma remains elusive even after all these years of investigation. Across sociology, anthropology, psychology, political science, history and theater studies, academics have wrestled with how exactly to explain, refine and apply it, as well as identify where it is located: in the powerful traits of a leader or in the susceptible minds of a follower or perhaps somewhere between the two, like a magnetic field…

…Weber himself would disagree with the individualized modern understanding of charisma. “He was actually using it in a far more sophisticated way,” he said. “It wasn’t about the power of the individual — it was about the reflection of that power by the audience, about whether they receive it. He saw it as a process of interaction. And he was as fascinated by crowds as he was by individuals.” In Weber’s words: “What is alone important is how the [charismatic] individual is actually regarded by those subject to charismatic authority, by his ‘followers’ or ‘disciples.’ … It is recognition on the part of those subject to authority which is decisive for the validity of charisma.”

The Eurocentric version of how Weber conceptualized charisma is that he took it from Christianity and transformed it into a theory for understanding Western culture and politics. In truth, it was also founded on numerous non-Western spiritual concepts that he’d discovered via the anthropological works of his day. In one of the less-quoted paragraphs of his 1920 book “The Sociology of Religion,” Weber wrote that his nascent formulation of charisma was inspired by mana (Polynesian), maga (Zoroastrian, and from which we get our word magic) and orenda (Native American). “In this moment,” Wright wrote in a research paper exploring this particular passage, “we see our modern political vocabulary taking shape before our eyes.”

Native American beliefs were of particular interest to Weber. On his only visit to America in 1904, he turned down an invitation from Theodore Roosevelt to visit the White House and headed to the Oklahoma plains in search of what remained of Indigenous communities there. Orenda is an Iroquois term for a spiritual energy that flows through everything in varying degrees of potency. Like charisma, possessors of orenda are said to be able to channel it to exert their will. “A shaman,” wrote the Native American scholar J.N.B. Hewitt, “is one whose orenda is great.” But unlike the Western use of charisma, orenda was said to be accessible to everything, animate and inanimate, from humans to animals and trees to stones. Even the weather could be said to have orenda. “A brewing storm,” wrote Hewitt, is said to be “preparing its orenda.” 

This diffuse element of orenda — the idea that it could be imbued in anything at all — has prefigured a more recent evolution in the Western conceptualization of charisma: that it is more than human. Archaeologists have begun to apply it to the powerful and active social role that certain objects have played throughout history. In environmentalism, Jamie Lorimer of Oxford University has written that charismatic species like lions and elephants “dominate the mediascapes that frame popular sensibilities toward wildlife” and feature “disproportionately in the databases and designations that perform conservation.” 

Compelling explorations of nonhuman charisma have also come from research on modern technology. Human relationships with technology have always been implicitly spiritual. In the 18th century, clockmakers became a metaphor for God and clockwork for the universe. Airplanes were described as “winged gospels.” The original iPhone was heralded, both seriously and mockingly, as “the Jesus phone.” As each new popular technology paints its own vision of a better world, we seek in these objects a sort of redemption, salvation or transcendence. Some deliver miracles, some just appear to, and others fail catastrophically. 

Today, something we view as exciting, terrifying and revolutionary, and have endowed with the ability to know our deepest beliefs, prejudices and desires, is not a populist politician, an internet influencer or a religious leader. It’s an algorithm. 

These technologies now have the power to act in the world, to know things and to make things happen. In many instances, their impact is mundane: They arrange news feeds, suggest clothes to buy and calculate credit scores. But as we interact more and more with them on an increasingly intimate level, in the way we would ordinarily with other humans, we develop the capacity to form charismatic bonds. 

It’s now fairly colloquial for someone to remark that they “feel seen” by algorithms and chatbots. In a 2022 study of people who had formed deep and long-term friendships with the AI-powered program Replika, participants reported that they viewed it as “a part of themselves or as a mirror.” On apps like TikTok, more than any other social media platform, the user experience is almost entirely driven by an intimate relationship with the algorithm. Users are fed a stream of videos not from friends or chosen creators, but mostly from accounts they don’t follow and haven’t interacted with. The algorithm wants users to spend more time on the platform, and so through a series of computational procedures, it draws them down a rabbit hole built from mathematical inferences of their passions and desires. 

The inability to understand quite how sophisticated algorithms exert their will on us (largely because such information is intentionally clouded), combined with a perception of their power, enables them to become an authority in our lives. As the psychologist Donald McIntosh explained almost half a century ago, “The outstanding quality of charisma is its enormous power, resting on the intensity and strength of the forces which lie unconscious in every human psyche. … The ability to tap these forces lies behind everything that is creative and constructive in human action, but also behind the terrible destructiveness of which humans are capable. … In the social and political realm, there is no power to match that of the leader who is able to evoke and harness the unconscious resources of his followers.”

In an increasingly complex and divided society, in which partisanship has hindered the prospect of cooperation on everything from human rights to the climate crisis, the thirst for a charismatic leader or artificial intelligence that can move the masses in one direction is as seductive as it has ever been. But whether such a charismatic phenomenon would lead to good or bad, liberation or violence, salvation or destruction, is a conundrum that remains at the core of this two-faced phenomenon. “The false Messiah is as old as the hope for the true Messiah,” wrote Franz Rosenzweig. “He is the changing form of this changeless hope.”… 

How our culture, politics, and technology became infused with a mysterious social phenomenon that everyone can feel but nobody can explain: “The Secret History And Strange Future Of Charisma,” from @joe_zadeh in @NoemaMag. Eminently worth reading in full.

* Robert Breault

###

As we muse on magnetism, we might recall that it was on this date in 1723 that Johann Sebastian Bach assumed the office of Thomaskantor (Musical Director of the Thomanerchor, now an internationally-known boys’ choir founded in Leipzig in 1212), presenting his new cantata, Die Elenden sollen essen, BWV 75 – a complex work in two parts of seven movements each that marks the beginning of his first annual cycle of cantatas – in the St. Nicholas Church.

Thomaskirche and its choir school, 1723 (source)

“You simply cannot invent any conspiracy theory so ridiculous and obviously satirical that some people somewhere don’t already believe it”*…

As Greg Miller explains, conspiracy theories seem to meet psychological needs and can be almost impossible to eradicate. But there does appear to be a remedy: keep them from taking root in the first place…

If conspiracy theories are as old as politics, they’re also — in the era of Donald Trump and QAnon — as current as the latest headlines. Earlier this month, the American democracy born of an eighteenth-century conspiracy theory faced its most severe threat yet — from another conspiracy theory, that (all evidence to the contrary) the 2020 presidential election was rigged. Are conspiracy theories truly more prevalent and influential today, or does it just seem that way?

The research isn’t clear. Rosenblum and others see evidence that belief in conspiracy theories is increasing and taking dangerous new forms. Others disagree. But scholars generally do agree that conspiracy theories have always existed and always will. They tap into basic aspects of human cognition and psychology, which may help explain why they take hold so easily — and why they’re seemingly impossible to kill.

Once someone has fully bought into a conspiracy theory, “there’s very little research that actually shows you can come back from that,” says Sander van der Linden, a social psychologist at the University of Cambridge whose research focuses on ways to combat misinformation. “When it comes to conspiracy theories, prevention is better than cure.”

Talking a true believer out of their belief in a conspiracy can be nearly impossible. (The believer will assume you’re hopelessly naïve or, worse, that you’re part of the cover-up.) Even when conspiracy theories make bold predictions that don’t come true, such as QAnon’s claim that Trump would win reelection, followers twist themselves in logical knots to cling to their core beliefs. “These beliefs are important to people, and letting them go means letting go of something important that has determined the way they see the world for some time,” says [Karen Douglas, a psychologist who studies conspiracy thinking at the University of Kent in the United Kingdom].

As a result, some researchers think that preventing conspiracy theories from taking hold in the first place is a better strategy than fact-checking and debunking them after they do — and they have been hard at work developing and testing such strategies. Van der Linden sees inoculation as a useful metaphor here. “I think one of the best solutions we have is to actually inject people with a weakened dose of the conspiracy…to help people build up mental or cognitive antibodies,” he says.

One way he and his colleagues have been trying to do that (no needles required) is by developing online games and apps. In a game called Bad News, for example, players assume the role of a fake news creator trying to attract followers and evolve from a social media nobody into the head of a fake-news empire…

The critical question — pushing the vaccine metaphor to its limits — is how to achieve herd immunity, the point at which enough of the population is immune so that conspiracy theories can’t go viral. It might be difficult to do that with games because they require people to take the time to engage, says Gordon Pennycook, a behavioral scientist at the University of Regina in Canada. So Pennycook has been working on interventions that he believes will be easier to scale up.

Even as researchers push to develop such measures, they acknowledge that eradicating bogus conspiracy theories may not be possible. Conspiracy theories flourished as far back as the Roman Empire, and they inspired an angry mob to storm the U.S. Capitol just last week. Specific theories may come and go, but the allure of conspiracy theories for people trying to make sense of events beyond their control seems more enduring. For better — and of late, very much for worse — they appear to be a permanent part of the human condition…

Eminently worth reading in full: “The enduring allure of conspiracies,” from @dosmonos in @NiemanLab.

* Robert Anton Wilson

###

As we fumble with the fantastic, we might send prodigious birthday greetings to G.K. Chesterton; he was born on this date in 1874. The author of 80 books, several hundred poems, over 200 short stories, 4,000 essays, and several plays, he was a literary and social critic, historian, playwright, novelist, Catholic theologian and apologist, debater, and mystery writer. Chesterton was a columnist for the Daily News, the Illustrated London News, and his own paper, G. K.’s Weekly, and wrote articles for the Encyclopædia Britannica. Chesterton created the priest-detective Father Brown, who appeared in a series of short stories, and had a huge influence on the development of the mystery genre; his best-known novel is probably The Man Who Was Thursday.

Chesterton’s faith, which he defended in print and speeches, brought him into conflict with the most famous atheist of the time, George Bernard Shaw, who said (on the death of his “friendly enemy”), “he was a man of colossal genius.”

The lunatic is the man who lives in a small world but thinks it is a large one; he is the man who lives in a tenth of the truth, and thinks it is the whole. The madman cannot conceive any cosmos outside a certain tale or conspiracy or vision.

G. K. Chesterton
George Bernard Shaw, Hilaire Belloc, and G. K. Chesterton

source

“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…

A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…

In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.

This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium and an alpha particle, releasing a whole lot of energy:

N14 + N14 ⇒ Mg24 + α + 17.7 MeV

The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?

Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.
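As an aside (not part of the talk), the energy release in that reaction can be checked by anyone with a table of atomic masses: the mass lost in the reaction reappears as kinetic energy via E = mc². A minimal sketch, using standard tabulated mass values as inputs:

```python
# Q-value check for N-14 + N-14 -> Mg-24 + alpha.
# Masses in unified atomic mass units (u), standard tabulated values.
M_N14 = 14.003074    # nitrogen-14
M_MG24 = 23.985042   # magnesium-24
M_HE4 = 4.002602     # helium-4 (the alpha particle)
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

# Mass that disappears in the reaction is released as energy.
mass_defect = 2 * M_N14 - (M_MG24 + M_HE4)
q_value = mass_defect * U_TO_MEV

print(f"Q ≈ {q_value:.1f} MeV")
```

With these modern mass values the Q-value comes out near 17.2 MeV, in the same ballpark as the 17.7 MeV figure quoted in the talk.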

Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.

… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.

[Ceglowski unpacks those assumptions…]

If you accept all these premises, what you get is disaster!

Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.

As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.

At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.

From there things get very sci-fi very quickly.

[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]

This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.

People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?

Is the idea of “superintelligence” just a memetic hazard?

When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.

The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.

[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]

The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.

In his book, Bostrom lists six things an AI would have to master to take over the world:

  • Intelligence Amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity

If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.

Sam Altman, the man who runs YCombinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.

Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.

So what’s the answer? What’s the fix?

We need better scifi! And like so many things, we already have the technology…

[Ceglowski explains– and demonstrates– what he means…]

In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.

It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…

In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…

Eminently worth reading in full: “Superintelligence- the idea that eats smart people,” from @baconmeteor.

* John Rich

###

As we find balance, we might recall that it was on this date in 1936 that Alan Turing‘s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…

source

“Cogito, ergo sum”*…

Descartes and Princess Elizabeth of Bohemia

Rene Descartes (and here), who laid the foundation for modern rationalism and ignited the interest in epistemology that began to grow in the 17th century, has been called the father of modern philosophy. Erik Hoel argues that he had very influential help…

Princess Elisabeth of Bohemia—the first person to fully understand the paradoxical nature of the mind-body problem, a mathematician, the possible romantic interest of Descartes, and an eventual abbess—was born in 1618, and lived in exile with her family in the Netherlands, a political refuge after her father’s brief reign. Her father’s rule had ended after he lost what was called the “Battle of the White Mountain,” for which he would be known via the sobriquet “the winter king,” having been in power for merely a season.

Elisabeth was a great philosopher in her own right—whip-smart and engaged by the intellectually stimulating times, she maintained numerous correspondences throughout her life on all manner of subjects. For her learning, within her family she was known as “the Greek,” and this was in a set of siblings that included an eventual king, another brother who was a famous scientist in addition to being a co-founder of the Hudson’s Bay Company, another sister who was a talented artist, and a further sister who was the eventual patron of Leibniz. Mathematician, philosopher, theologian, and politician, Elisabeth was, in her day, an important hub in that republic of letters that would become science.

The princess and Descartes only met in person a few times, but maintained a long correspondence over the years, exchanging a total of fifty-eight letters that have survived (more may not have). The correspondence began in 1643, and would last, on and off, until Descartes’s surprising death in 1650 (he died of pneumonia after being forced to wake early in the morning and walk through a cold castle to tutor a different and far more demanding queen). In the princess and the philosopher’s letters, Descartes usually signed off with “Your very humble and very obedient servant” and Elisabeth with “Your very affectionate friend at your service.”

Their letters are vivid historical reading—their repartee is funny and humble and courteous, intimate and yet respectful of the difference in their classes (Elisabeth’s far above Descartes’s); but they also dig deep into Descartes’s philosophy, with Elisabeth always probing at holes and Descartes always on the defensive to cover them…

Philosophical letters from a possible Renaissance romance: “The mind-body problem was discovered by a princess,” from @erikphoel.

For more, see: “Princess Elizabeth on the Mind-Body Problem” (source of the image above) and Elizabeth’s entry in the Stanford Encyclopedia of Philosophy.

And for the likely inspiration for Descartes’ most famous phrase– St. Teresa of Ávila– see “One of Descartes’ most famous ideas was first articulated by a woman.”

* Rene Descartes

###

As we duel with duality, we might spare a thought for Buddhadasa (honored with the ecclesiastical title Phra Dharmakosācārya). A Thai ascetic-philosopher, he was an innovative reinterpreter of Buddhist doctrine and Thai folk beliefs who fostered a reformation in conventional religious perceptions in his home country and abroad.

Buddhadasa developed a personal view that those who have penetrated the essential nature of religions consider “all religions to be inwardly the same,” while those who have the highest understanding of dhamma feel “there is no religion.”

source