(Roughly) Daily

Posts Tagged ‘philosophy’

“Everywhere is walking distance if you have the time”*…

Sara Walker and Lee Cronin suggest that time is not a backdrop, nor an illusion, nor an emergent phenomenon; rather, they argue in an essay from The Santa Fe Institute, it has a physical size that can be measured in laboratories…

A timeless universe is hard to imagine, but not because time is a technically complex or philosophically elusive concept. There is a more structural reason: imagining timelessness requires time to pass. Even when you try to imagine its absence, you sense it moving as your thoughts shift, your heart pumps blood to your brain, and images, sounds and smells move around you. The thing that is time never seems to stop. You may even feel woven into its ever-moving fabric as you experience the Universe coming together and apart. But is that how time really works?

According to Albert Einstein, our experience of the past, present and future is nothing more than ‘a stubbornly persistent illusion’. According to Isaac Newton, time is nothing more than a backdrop, outside of life. And according to the laws of thermodynamics, time is nothing more than entropy and heat. In the history of modern physics, there has never been a widely accepted theory in which a moving, directional sense of time is fundamental. Many of our most basic descriptions of nature – from the laws of movement to the properties of molecules and matter – seem to exist in a universe where time doesn’t really pass. However, recent research across a variety of fields suggests that the movement of time might be more important than most physicists had once assumed.

A new form of physics called assembly theory suggests that a moving, directional sense of time is real and fundamental. It suggests that the complex objects in our Universe that have been made by life, including microbes, computers and cities, do not exist outside of time: they are impossible without the movement of time. From this perspective, the passing of time is not only intrinsic to the evolution of life or our experience of the Universe. It is also the ever-moving material fabric of the Universe itself. Time is an object. It has a physical size, like space. And it can be measured at a molecular level in laboratories.
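
The measurable quantity behind that claim is assembly theory’s “assembly index”: roughly, the minimum number of joining steps needed to build an object from basic parts, where anything already built can be reused. The sketch below is a toy, string-based illustration of that counting idea only; restricting intermediates to substrings of the target and using a brute-force search are my simplifications, not Walker and Cronin’s molecular algorithm or laboratory procedure.

    def assembly_index(target: str) -> int:
        """Toy 'assembly index' for a string: the minimum number of join
        operations needed to build `target` from its individual characters,
        where each step concatenates two already-built pieces and every piece
        can be reused. Intermediates are pruned to substrings of the target,
        which keeps this brute-force breadth-first search small."""
        if len(target) <= 1:
            return 0
        start = frozenset(target)                # basic building blocks: the characters
        frontier, seen = {start}, {start}
        steps = 0
        while frontier:
            steps += 1
            nxt = set()
            for pool in frontier:
                for a in pool:
                    for b in pool:
                        joined = a + b
                        if joined == target:
                            return steps
                        if joined in target:     # keep only substrings of the target
                            grown = pool | {joined}
                            if grown not in seen:
                                seen.add(grown)
                                nxt.add(grown)
            frontier = nxt
        raise ValueError("target cannot be assembled")

    print(assembly_index("banana"))   # 4: e.g. an -> anan -> banan -> banana
    print(assembly_index("abcabc"))   # 3: ab -> abc -> abcabc

In the theory proper, the analogous count for a molecule is inferred experimentally (for example, from mass spectrometry data), which is the sense in which the essay says time can be measured at a molecular level in laboratories.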

The unification of time and space radically changed the trajectory of physics in the 20th century. It opened new possibilities for how we think about reality. What could the unification of time and matter do in our century? What happens when time is an object?…

Find out at: “Time is an object,” by @Sara_Imari and @leecronin, from @sfiscience in @aeonmag.

Apposite: “The New Thermodynamic Understanding of Clocks.”

* Steven Wright

###

As we contemplate chronology, we might recall that it was on this date in 1947 that the Doomsday Clock appeared for the first time (as the fourth quadrant of a clock face with its hands at 7 minutes to midnight) as the background image on the cover of the June issue of the Bulletin of the Atomic Scientists. From then to the present, the Doomsday Clock image has been on the cover of the Bulletin, though the hands over the years have been shown moving forward or back to convey how close humanity is to catastrophic destruction (midnight).

The clock currently stands at 90 seconds to midnight.

source

[HBD, GC(S)]

Written by (Roughly) Daily

June 1, 2023 at 1:00 am

“You simply cannot invent any conspiracy theory so ridiculous and obviously satirical that some people somewhere don’t already believe it”*…

As Greg Miller explains, conspiracy theories seem to meet psychological needs and can be almost impossible to eradicate. But there does appear to be a remedy: keep them from taking root in the first place…

If conspiracy theories are as old as politics, they’re also — in the era of Donald Trump and QAnon — as current as the latest headlines. Earlier this month, the American democracy born of an eighteenth-century conspiracy theory faced its most severe threat yet — from another conspiracy theory, that (all evidence to the contrary) the 2020 presidential election was rigged. Are conspiracy theories truly more prevalent and influential today, or does it just seem that way?

The research isn’t clear. [Political theorist Nancy] Rosenblum and others see evidence that belief in conspiracy theories is increasing and taking dangerous new forms. Others disagree. But scholars generally do agree that conspiracy theories have always existed and always will. They tap into basic aspects of human cognition and psychology, which may help explain why they take hold so easily — and why they’re seemingly impossible to kill.

Once someone has fully bought into a conspiracy theory, “there’s very little research that actually shows you can come back from that,” says Sander van der Linden, a social psychologist at the University of Cambridge whose research focuses on ways to combat misinformation. “When it comes to conspiracy theories, prevention is better than cure.”

Talking a true believer out of their belief in a conspiracy can be nearly impossible. (The believer will assume you’re hopelessly naïve or, worse, that you’re part of the cover-up). Even when conspiracy theories make bold predictions that don’t come true, such as QAnon’s claim that Trump would win reelection, followers twist themselves in logical knots to cling to their core beliefs. “These beliefs are important to people, and letting them go means letting go of something important that has determined the way they see the world for some time,” says [Karen Douglas, a psychologist who studies conspiracy thinking at the University of Kent in the United Kingdom].

As a result, some researchers think that preventing conspiracy theories from taking hold in the first place is a better strategy than fact-checking and debunking them after they do — and they have been hard at work developing and testing such strategies. Van der Linden sees inoculation as a useful metaphor here. “I think one of the best solutions we have is to actually inject people with a weakened dose of the conspiracy…to help people build up mental or cognitive antibodies,” he says.

One way he and his colleagues have been trying to do that (no needles required) is by developing online games and apps. In a game called Bad News, for example, players assume the role of a fake news creator trying to attract followers and evolve from a social media nobody into the head of a fake-news empire…

The critical question — pushing the vaccine metaphor to its limits — is how to achieve herd immunity, the point at which enough of the population is immune so that conspiracy theories can’t go viral. It might be difficult to do that with games because they require people to take the time to engage, says Gordon Pennycook, a behavioral scientist at the University of Regina in Canada. So Pennycook has been working on interventions that he believes will be easier to scale up.

Even as researchers push to develop such measures, they acknowledge that eradicating bogus conspiracy theories may not be possible. Conspiracy theories flourished as far back as the Roman Empire, and they inspired an angry mob to storm the U.S. Capitol just last week. Specific theories may come and go, but the allure of conspiracy theories for people trying to make sense of events beyond their control seems more enduring. For better — and of late, very much for worse — they appear to be a permanent part of the human condition…

Eminently worth reading in full: “The enduring allure of conspiracies,” from @dosmonos in @NiemanLab.

* Robert Anton Wilson

###

As we fumble with the fantastic, we might send prodigious birthday greetings to G.K. Chesterton; he was born on this date in 1874. The author of 80 books, several hundred poems, over 200 short stories, 4000 essays, and several plays, he was a literary and social critic, historian, playwright, novelist, Catholic theologian and apologist, debater, and mystery writer. Chesterton was a columnist for the Daily News, the Illustrated London News, and his own paper, G. K.’s Weekly, and wrote articles for the Encyclopædia Britannica. Chesterton created the priest-detective Father Brown, who appeared in a series of short stories, and had a huge influence on the development of the mystery genre; his best-known novel is probably The Man Who Was Thursday.

Chesterton’s faith, which he defended in print and speeches, brought him into conflict with the most famous atheist of the time, George Bernard Shaw, who said (on the death of his “friendly enemy”), “he was a man of colossal genius.”

The lunatic is the man who lives in a small world but thinks it is a large one; he is the man who lives in a tenth of the truth, and thinks it is the whole. The madman cannot conceive any cosmos outside a certain tale or conspiracy or vision.

G. K. Chesterton
George Bernard Shaw, Hilaire Belloc, and G. K. Chesterton

source

“If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera”*…

A talk from Maciej Cegłowski that provides helpful context for thinking about A.I…

In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.

This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium, an alpha particle, and release a whole lot of energy:

N14 + N14 ⇒ Mg24 + α + 17.7 MeV
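
As a quick check on that line of bookkeeping: charge is conserved (7 + 7 protons in, 12 + 2 out), mass number is conserved (14 + 14 = 24 + 4), and the energy release can be estimated from the mass defect. The back-of-the-envelope Python below uses approximate atomic-mass values I have supplied rather than anything from the talk; with them it lands near 17 MeV, close to the 17.7 MeV quoted.

    # Back-of-the-envelope Q-value for N14 + N14 -> Mg24 + alpha.
    # Approximate atomic masses in unified atomic mass units (u); because both
    # sides carry 14 electrons' worth of atomic mass, atomic masses suffice.
    M_N14 = 14.003074      # nitrogen-14
    M_MG24 = 23.985042     # magnesium-24
    M_HE4 = 4.002602       # helium-4 (the alpha particle)
    U_TO_MEV = 931.494     # energy equivalent of 1 u, in MeV

    mass_defect = 2 * M_N14 - (M_MG24 + M_HE4)   # in u
    q_value = mass_defect * U_TO_MEV             # in MeV
    print(f"Q ≈ {q_value:.1f} MeV")              # ≈ 17.2 MeV with these values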

The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth’s history. Were we throwing a match into a bunch of dry leaves?

Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we’re all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.

Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.

… the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.

[Ceglowski unpacks those assumptions…]

If you accept all these premises, what you get is disaster!

Because at some point, as computers get faster, and we program them to be more intelligent, there’s going to be a runaway effect like an explosion.

As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it’s not going to stop until it hits a natural limit that might be very many times greater than human intelligence.

At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.

From there things get very sci-fi very quickly.

[Ceglowski unspools a scenario in which Bostrom’s worst nightmare comes true…]

This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.

People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?

Is the idea of “superintelligence” just a memetic hazard?

When you’re evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it’s coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you’re dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that’s the inside view talking.

The outside view doesn’t care about content, it sees the form and the context, and it doesn’t look good.

[Ceglowski then engages the question of AI risk from both of those perspectives; he comes down on the side of the “outside”…]

The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.

In his book, Bostrom lists six things an AI would have to master to take over the world:

  • Intelligence Amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity

If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.

Sam Altman, the man who runs YCombinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.

Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don’t like to be manipulated. You can’t tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

I’ve even seen people in the so-called rationalist community refer to people who they don’t think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

So I work in an industry where the self-professed rationalists are the craziest ones of all. It’s getting me down… Really it’s a distorted image of themselves that they’re reacting to. There’s a feedback loop between how intelligent people imagine a God-like intelligence would behave, and how they choose to behave themselves.

So what’s the answer? What’s the fix?

We need better scifi! And like so many things, we already have the technology…

[Ceglowski explains– and demonstrates– what he means…]

In the near future, the kind of AI and machine learning we have to face is much different than the phantasmagorical AI in Bostrom’s book, and poses its own serious problems.

It’s like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

And of course there’s the question of how AI and machine learning affect power relationships. We’ve watched surveillance become a de facto part of our lives, in an unexpected way. We never thought it would look quite like this.

So we’ve created a very powerful system of social control, and unfortunately put it in the hands of people who are distracted by a crazy idea.

What I hope I’ve done today is shown you the dangers of being too smart. Hopefully you’ll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people…

In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world…

Eminently worth reading in full: “Superintelligence- the idea that eats smart people,” from @baconmeteor.

* John Rich

###

As we find balance, we might recall that it was on this date in 1936 that Alan Turing’s paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in which he unpacked the concept of what we now call the Turing Machine, was received by the London Mathematical Society, which published it several months later. It was, as (Roughly) Daily reported a few days ago, the start of all of this…

source

“Cogito, ergo sum”*…

Descartes and Princess Elizabeth of Bohemia

Rene Descartes (and here), who laid the foundation for modern rationalism and ignited the interest in epistemology that began to grow in the 17th century, has been called the father of modern philosophy. Erik Hoel argues that he had very influential help…

Princess Elisabeth of Bohemia—the first person to fully understand the paradoxical nature of the mind-body problem, a mathematician, the possible romantic interest of Descartes, and an eventual abbess—was born in 1618, and lived in exile with her family in the Netherlands, a political refuge after her father’s brief reign. Her father’s rule had ended after he lost what was called the “Battle of the White Mountain,” for which he would be known via the sobriquet “the winter king,” having been in power for merely a season.

Elisabeth was a great philosopher in her own right—whip-smart and engaged by the intellectually stimulating times, she maintained numerous correspondences throughout her life on all manner of subjects. For her learning, within her family she was known as “the Greek,” and this was in a set of siblings that included an eventual king, another brother who was a famous scientist in addition to being a co-founder of the Hudson’s Bay Company, another sister who was a talented artist, and a further sister who was the eventual patron of Leibniz. Mathematician, philosopher, theologian, and politician, Elisabeth was, in her day, an important hub in that republic of letters that would become science.

The princess and Descartes only met in person a few times, but maintained a long correspondence over the years, exchanging a total of fifty-eight letters that have survived (more may not have). The correspondence began in 1643, and would last, on and off, until Descartes’s surprising death in 1650 (he died of pneumonia after being forced to wake early in the morning and walk through a cold castle to tutor a different and far more demanding queen). In the princess and the philosopher’s letters, Descartes usually signed off with “Your very humble and very obedient servant” and Elisabeth with “Your very affectionate friend at your service.”

Their letters are vivid historical reading—the two’s repartee is funny and humble and courteous, intimate and yet respectful of the difference in their classes (Elisabeth’s far above Descartes’s); but they also dig deep into Descartes’s philosophy, with Elisabeth always probing at holes and Descartes always on the defensive to cover them…

Philosophical letters from a possible Renaissance romance: “The mind-body problem was discovered by a princess,” from @erikphoel.

For more, see: “Princess Elizabeth on the Mind-Body Problem” (source of the image above) and Elizabeth’s entry in the Stanford Encyclopedia of Philosophy.

And for the likely inspiration for Descartes’ most famous phrase– St. Teresa of Ávila– see “One of Descartes’ most famous ideas was first articulated by a woman.”

* Rene Descartes

###

As we duel with duality, we might spare a thought for Buddhadasa (born Phra Dharmakosācārya). A Thai ascetic-philosopher, he was an innovative reinterpreter of Buddhist doctrine and Thai folk beliefs who fostered a reformation in conventional religious perceptions in his home country and abroad.

Buddhadasa developed a personal view that those who have penetrated the essential nature of religions consider “all religions to be inwardly the same,” while those who have the highest understanding of dhamma feel “there is no religion.”

source

“No problem can be solved from the same level of consciousness that created it”*…

Annaka Harris on the difficulty in understanding consciousness…

The central challenge to a science of consciousness is that we can never acquire direct evidence of consciousness apart from our own experience. When we look at all the organisms (or collections of matter) in the universe and ask ourselves, “Which of these collections of matter contain conscious experiences?” in the broadest sense, the answer has to be “some” or “all”—the only thing we have direct evidence to support is that the answer isn’t “none,” as we know that at least our own conscious experiences exist.

Until we attain a significantly more advanced understanding of the brain, and of many other systems in nature for that matter, we’re forced to begin with one of two assumptions: either consciousness arises at some point in the physical world, or it is a fundamental part of the physical world (some, or all). And the sciences have thus far led with the assumption that the answer is “some” (and so have I, for most of my career) for understandable reasons. But I would argue that the grounds for this starting assumption have become weaker as we learn more about the brain and the role consciousness plays in behavior.

The problem is that what we deem to be conscious processes in nature is based solely on reportability. And at the very least, the work with split-brain and locked-in patients should have radically shifted our reliance on reportability at this point…

The realization that all of our scientific investigations of consciousness are unwittingly rooted in a blind assumption led me to pose two questions that I think are essential for a science of consciousness to keep asking:

  1. Can we find conclusive evidence of consciousness from outside a system?
  2. Is consciousness causal? (Is it doing something? Is it driving any behavior?)

The truth is that we have less and less reason to respond “yes” to either question with any confidence. And if the answer to these questions is in fact “no,” which is entirely possible, we’ll be forced to reconsider our jumping-off point. Personally I’m still agnostic, putting the chances that consciousness is fundamental vs. emergent at more or less 50/50. But after focusing on this topic for more than twenty years, I’m beginning to think that assuming consciousness is fundamental is actually a slightly more coherent starting place…

“The Strong Assumption,” from @annakaharris.

See also: “How Do We Think Beyond Our Own Existence?“, from @annehelen.

* Albert Einstein

###

As we noodle on knowing, we might recall that it was on this date in 1987 that a patent (U.S. Patent No. 4,666,425) was awarded to Chet Fleming for a “Device for Perfusing an Animal Head”– a device for keeping a severed head alive.

That device, described as a “cabinet,” used a series of tubes to accomplish what a body does for most heads that are not “discorped”—that is, removed from their bodies. In the patent application, Fleming describes a series of tubes that would circulate blood and nutrients through the head and take deoxygenated blood away, essentially performing the duties of a living thing’s circulatory system. Fleming also suggested that the device might be used for grimmer purposes.

“If desired, waste products and other metabolites may be removed from the blood, and nutrients, therapeutic or experimental drugs, anti-coagulants and other substances may be added to the blood,” the patent reads.

Although obviously designed for research purposes, the patent does acknowledge that “it is possible that after this invention has been thoroughly tested on research animals, it might also be used on humans suffering from various terminal illnesses.”

Fleming, a trained lawyer who had the reputation of being an eccentric, wasn’t exactly joking, but he was worried that somebody would start doing this research. The patent was a “prophetic patent”—that is, a patent for something which has never been built and may never be built. It was likely intended to prevent others from trying to keep severed heads alive using that technology…

Smithsonian Magazine
An illustration from the patent application (source)

Written by (Roughly) Daily

May 19, 2023 at 1:00 am
