(Roughly) Daily

Posts Tagged ‘consciousness’

“Consciousness was upon him before he could get out of the way”*…

Some scientists, when looking at the ladder of nature, find no clear line between mind and no-mind…

Last year, the cover of New Scientist ran the headline, “Is the Universe Conscious?” Mathematician and physicist Johannes Kleiner, at the Munich Center for Mathematical Philosophy in Germany, told author Michael Brooks that a mathematically precise definition of consciousness could mean that the cosmos is suffused with subjective experience. “This could be the beginning of a scientific revolution,” Kleiner said, referring to research he and others have been conducting. 

Kleiner and his colleagues are focused on the Integrated Information Theory of consciousness, one of the more prominent theories of consciousness today. As Kleiner notes, IIT (as the theory is known) is thoroughly panpsychist because all integrated information has at least one bit of consciousness.
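
The intuition behind “integrated information” can be made concrete with a toy calculation. What follows is emphatically not IIT’s actual Φ (which is far more involved), just a minimal, hand-rolled illustration of the core idea: a system can carry predictive information as a whole that its parts, modeled in isolation, lose entirely.

```python
import itertools
import math
from collections import Counter

# Toy illustration of "integration" -- NOT IIT's real Phi, just the
# core intuition. System: two binary nodes that swap values each tick
# (A' = B, B' = A).

def step(state):
    a, b = state
    return (b, a)

states = list(itertools.product([0, 1], repeat=2))

def entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_explained(inputs, outputs):
    # Mutual information I(input; output) under a uniform input prior.
    h_out = entropy(outputs)
    h_cond = 0.0
    for inp in set(inputs):
        sub = [o for i, o in zip(inputs, outputs) if i == inp]
        h_cond += len(sub) / len(outputs) * entropy(sub)
    return h_out - h_cond

# The whole system predicts its own next state perfectly: 2 bits.
ei_whole = info_explained(states, [step(s) for s in states])

# Node A modeled alone sees only its own value, but its future is
# determined by B, which it cannot see -- so it explains 0 bits.
ei_a = info_explained([s[0] for s in states], [step(s)[0] for s in states])

phi_toy = ei_whole - 2 * ei_a   # node B is symmetric with A
print(ei_whole, ei_a, phi_toy)  # 2.0 0.0 2.0
```

The whole here is literally more informative than the sum of its parts, which is the property IIT quantifies (in a vastly more sophisticated way) and then identifies with consciousness.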

You might see the rise of panpsychism as part of a Copernican trend—the idea that we’re not special. The Earth is not the center of the universe. Humans are not a treasured creation, or even the pinnacle of evolution. So why should we think that creatures with brains, like us, are the sole bearers of consciousness? In fact, panpsychism has been around for thousands of years as one of various solutions to the mind-body problem. David Skrbina’s 2007 book, Panpsychism in the West, provides an excellent history of this intellectual tradition.

While there are many versions of panpsychism, the version I find appealing is known as constitutive panpsychism. It states, to put it simply, that all matter has some associated mind or consciousness, and vice versa. Where there is mind there is matter and where there is matter there is mind. They go together. As modern panpsychists like Alfred North Whitehead, David Ray Griffin, Galen Strawson, and others have argued, all matter has some capacity for feeling, albeit highly rudimentary feeling in most configurations of matter. 

While inanimate matter doesn’t evolve like animate matter, inanimate matter does behave. It does things. It responds to forces. Electrons move in certain ways that differ under different experimental conditions. These types of behaviors have prompted respected physicists to suggest that electrons may have some type of extremely rudimentary mind. For example, the late Freeman Dyson, the well-known American physicist, stated in his 1979 book, Disturbing the Universe, that “the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call ‘chance’ when made by electrons.” Quantum chance is better framed as quantum choice—choice, not chance, at every level of nature. David Bohm, another well-known American physicist, argued similarly: “The ability of form to be active is the most characteristic feature of mind, and we have something that is mind-like already with the electron.”

Many biologists and philosophers have recognized that there is no hard line between animate and inanimate. J.B.S. Haldane, the eminent British biologist, supported the view that there is no clear demarcation line between what is alive and what is not: “We do not find obvious evidence of life or mind in so-called inert matter…; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.”…

“Electrons May Very Well Be Conscious”: Tam Hunt (@TamHunt) explains.

* Kingsley Amis

###

As we challenge (chauvinistic?) conventions, we might spare a thought for a man who was no great respecter of consciousness, B. F. Skinner; he died on this date in 1990. A psychologist, he was the pioneer and champion of what he called “radical behaviorism,” the assumption that behavior is a consequence of environmental histories of “reinforcement” (reactions to positive and negative stimuli):

What is felt or introspectively observed is not some nonphysical world of consciousness, mind, or mental life but the observer’s own body. This does not mean, as I shall show later, that introspection is a kind of psychological research, nor does it mean (and this is the heart of the argument) that what are felt or introspectively observed are the causes of the behavior. An organism behaves as it does because of its current structure, but most of this is out of reach of introspection.

About Behaviorism

Building on the work of Ivan Pavlov and John B. Watson, Skinner used operant conditioning to strengthen behavior, considering the rate of response to be the most effective measure of response strength. To study operant conditioning, he invented the operant conditioning chamber (aka the Skinner box).
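
The logic is easy to caricature in code. Below is a deliberately crude sketch (my own toy model, not Skinner’s apparatus or any standard learning rule): a response that is reinforced becomes more probable, one that goes unreinforced extinguishes, and the rate of response, Skinner’s preferred index of response strength, rises or falls accordingly.

```python
import random

random.seed(0)

# Toy operant-conditioning loop (a caricature, not Skinner's method):
# reinforcement strengthens a response, extinction weakens it.
LEARNING_RATE = 0.2

def session(p_press, reinforced, steps=500):
    presses = 0
    for _ in range(steps):
        if random.random() < p_press:          # the organism emits a response
            presses += 1
            if reinforced:                      # food pellet follows the press
                p_press += LEARNING_RATE * (1 - p_press)
            else:                               # nothing follows: extinction
                p_press -= LEARNING_RATE * p_press
    return presses / steps                      # rate of response

print("rate with reinforcement:", session(0.05, reinforced=True))
print("rate under extinction:  ", session(0.50, reinforced=False))
```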

Cf. also: Thomas Pynchon’s Gravity’s Rainbow.

source

“To sleep: perchance to dream: ay, there’s the rub”*…

I’m not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think that this was just a way of understanding how we get stuff hilariously wrong – think of Taylor’s Scientific Management, how its grounding in mechanical systems inflicted such cruelty on workers who Taylor demanded ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can’t remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they had some explanatory power, they also had glaring deficits.

Thankfully, we’ve got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it’s unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses on how brains work!

Erik P Hoel is a Tufts University neuroscientist. He’s a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand OBH, you first have to understand how overfitting works in machine learning: “overfitting” is what happens when a statistical model learns its training data too well, latching onto quirks and spurious correlations that don’t hold in the wider world, so it makes bad predictions about anything new.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a “gaydar model” that “can predict sexual orientation from faces.”

That’s overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.

To combat overfitting, ML researchers sometimes inject noise into the training data, in an effort to break up these spurious correlations.

And that’s what Hoel thinks our brains are doing while we sleep: injecting noisy “training data” into our conceptions of the universe so we aren’t led astray by the spurious correlations of everyday experience.
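
To make the mechanism concrete, here is a minimal sketch (my own illustration in numpy, not Hoel’s model): a high-degree polynomial overfits a tiny training set, and replicating that set with jittered inputs (the noise-injection trick described above) pulls the fit back toward the true curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: a sine wave plus measurement noise.
x_train = rng.uniform(0, 3, size=12)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=x_train.size)

x_test = np.linspace(0, 3, 200)
y_test = np.sin(x_test)

def rmse(model):
    return np.sqrt(np.mean((model(x_test) - y_test) ** 2))

# 1) Overfit: a degree-11 polynomial threads all 12 training points
#    exactly -- and swings wildly everywhere in between.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, 11)
print("overfit test RMSE:       ", rmse(overfit))

# 2) Noise injection: replicate the training set many times with
#    jittered inputs, removing the model's excuse to memorize.
x_aug = np.concatenate(
    [x_train + rng.normal(scale=0.15, size=x_train.size) for _ in range(30)]
)
y_aug = np.tile(y_train, 30)
denoised = np.polynomial.Polynomial.fit(x_aug, y_aug, 11)
print("noise-injected test RMSE:", rmse(denoised))
```

In Hoel’s analogy, the jittered copies play the role of dreams: corrupted replays of waking experience that keep the fit general.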

Overfitting is a real problem for people (another word for “overfitting” is “prejudice”)…

Sleeping, dreaming, and the importance of a nightly dose of irrationality – Cory Doctorow (@doctorow) explains: “Dreaming and overfitting,” from his ever-illuminating newsletter, Pluralistic. Eminently worthy of reading in full.

(Image above: Gontzal García del Caño, CC BY-NC-SA, modified)

* Shakespeare, Hamlet

###

As we nod off, we might send fully-oxygenated birthday greetings to Corneille Jean François Heymans; he was born on this date in 1892. A physiologist, he won the Nobel Prize for Physiology or Medicine in 1938 for showing how blood pressure and the oxygen content of the blood are measured by the body and transmitted to the brain via the nerves and not by the blood itself, as had previously been believed.

source

“A year spent in artificial intelligence is enough to make one believe in God”*…

A scan of the workings of an automaton of a friar, c1550. Possibly circle of Juanelo Turriano (c1500-85), probably Spanish.

The wooden monk, a little over two feet tall, ambles in a circle. Periodically, he raises a gripped cross and rosary towards his lips and his jaw drops like a marionette’s, affixing a kiss to the crucifix. Throughout his supplications, those same lips seem to mumble, as if he’s quietly uttering penitential prayers, and occasionally the tiny monk will raise his empty fist to his torso as he beats his breast. His head is finely detailed, a tawny chestnut colour with a regal Roman nose and dark hooded eyes, his pate scraped clean of even a tonsure. For almost five centuries, the carved clergyman has made his rounds, wound up by an ingenious internal mechanism hidden underneath his carved Franciscan robes, a monastic robot making his clockwork prayers.

Today his home is the Smithsonian National Museum of American History in Washington, DC, but before that he resided in that distinctly un-Catholic city of Geneva. His origins are more mysterious, though similar divine automata have been attributed to Juanelo Turriano, the 16th-century Italian engineer and royal clockmaker to the Habsburgs. Following Philip II’s son’s recovery from an illness, the reverential king supposedly commissioned Turriano to answer God’s miracle with a miracle of his own. Scion of the Habsburgs’ massive fortune of Aztec and Incan gold, hammer against the Protestant English and patron of the Spanish Inquisition, Philip II was every inch a Catholic zealot whom the British writer and philosopher G K Chesterton described as having a face ‘as a fungus of a leprous white and grey’, overseeing his empire in rooms where ‘walls are hung with velvet that is black and soft as sin’. It’s a description that evokes similarly uncanny feelings for any who should view Turriano’s monk, for there is one inviolate rule about the robot: he is creepy.

Elizabeth King, an American sculptor and historian, notes that an ‘uncanny presence separates it immediately from later automata: it is not charming, it is not a toy … it engages even the 20th-century viewer in a complicated and urgent way.’ The late Spanish engineer José A García-Diego is even more unsparing: the device, he wrote, is ‘considerably unpleasant’. One reason for his unsettling quality is that the monk’s purpose isn’t to provide simulacra of prayer, but to actually pray. Turriano’s device doesn’t serve to imitate supplication, he is supplicating; the mechanism isn’t depicting penitence, the machine performs it…

The writer Jonathan Merritt has argued in The Atlantic that rapidly escalating technological change has theological implications far beyond the political, social and ethical questions that Pope Francis raises, claiming that the development of self-aware computers would have implications for our definition of the soul, our beliefs about sin and redemption, our ideas about free will and providence. ‘If Christians accept that all creation is intended to glorify God,’ Merritt asked, ‘how would AI do such a thing? Would AI attend church, sing hymns, care for the poor? Would it pray?’ Of course, to the last question we already have an answer: AI would pray, because as Turriano’s example shows, it already has. Pope Francis also anticipated this in his November prayers, saying of AI ‘may it “be human”.’

While nobody believes that consciousness resides within the wooden head of a toy like Turriano’s, no matter how immaculately constructed, his disquieting example serves to illustrate what it might mean for an artificial intelligence in the future to be able to orient itself towards the divine. How different traditions might respond to this is difficult to anticipate. For Christians invested in the concept of an eternal human soul, a synthetic spirit might be a contradiction. Buddhist and Hindu believers, whose traditions are more apt to see the individual soul as a smaller part of a larger system, might be more amenable to the idea of spiritual machines. That’s the language that the futurist Ray Kurzweil used in calling our upcoming epoch the ‘age of spiritual machines’; perhaps it’s just as appropriate to think of it as the ‘Age of Turriano’, since these issues have long been simmering in the theological background, only waiting to boil over in the coming decades.

If an artificial intelligence – a computer, a robot, an android – is capable of complex thought, of reason, of emotion, then in what sense can it be said to have a soul? How does traditional religion react to a constructed person, at one remove from divine origins, and how are we to reconcile its role in the metaphysical order? Can we speak of salvation and damnation for digital beings? And is there any way in which we can evangelise robots or convert computers? Even for steadfast secularists and materialists, for whom those questions make no philosophical sense for humans, much less computers, that this will become a theological flashpoint for believers is something to anticipate, as it will doubtlessly have massive social, cultural and political ramifications.

This is no scholastic issue of how many angels can dance on a silicon chip, since it seems inevitable that computer scientists will soon be able to develop an artificial intelligence that easily passes the Turing test, that surpasses the understanding of those who’ve programmed it. In an article for CNBC entitled ‘Computers Will Be Like Humans By 2029’ (2014), the journalist Cadie Thompson quotes Kurzweil, who confidently (if controversially) contends that ‘computers will be at human levels, such as you can have a human relationship with them, 15 years from now.’ With less than a decade left to go, Kurzweil explains that he’s ‘talking about emotional intelligence. The ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy, that is the cutting edge of human intelligence, that is not a sideshow.’

Often grouped with other transhumanists who optimistically predict a coming millennium of digital transcendence, Kurzweil is a believer in what’s often called the ‘Singularity’, the moment at which humanity’s collective computing capabilities supersede our ability to understand the machines that we’ve created, and presumably some sort of artificial consciousness develops. While bracketing out the details, let’s assume that Kurzweil is broadly correct that, at some point in this century, an AI will develop that outstrips all past digital intelligences. If it’s true that automata can then be as funny, romantic, loving and sexy as the best of us, it could also be assumed that they’d be capable of piety, reverence and faith. When it’s possible to make not just a wind-up clock monk, but a computer that’s actually capable of prayer, how then will faith respond?…

Can a robot pray? Does an AI have a soul? Advances in automata raise theological debates that will shape the secular world; from Ed Simon (@WithEdSimon): “Machine in the ghost.” Do read the piece in full.

Then, for a different (but in the end, not altogether contradictory) view: “The Thoughts The Civilized Keep.”

And for another (related) angle: “Is it OK to torture a computer program?”

For more on the work of sculptor and historian Elizabeth King on the Smithsonian automaton friar, please see her articles here and here, and her forthcoming book, Mysticism and Machinery.

* Alan Perlis (first recipient of the Turing Award)

###

As we enlarge the tent, we might send revelatory birthday greetings to Albert Hofmann; he was born on this date in 1906. As a young chemist at Sandoz in Switzerland, Hofmann was searching for a respiratory and circulatory stimulant when he fabricated lysergic acid diethylamide (LSD); handling it, he absorbed a bit through his fingertips and realized that the compound had psychoactive effects. Three days later, on April 19, 1943 – a day now known as “Bicycle Day” – Hofmann intentionally ingested 250 micrograms of LSD, then rode home on a bike, a journey that became, pun intended, the first intentional acid trip. Hofmann was also the first person to isolate, synthesize, and name the principal psychedelic mushroom compounds psilocybin and psilocin.

 source

“Life is really simple, but we insist on making it complicated”*…

One of the dominant themes of the last few years is that nothing makes sense. Donald Trump is president, QAnon has mainstreamed fringe conspiracy theories, and hundreds of thousands are dead from a pandemic and climate change while many Americans do not believe that the pandemic or climate change are deadly. It’s incomprehensible.

I am here to tell you that the reason so much of the world seems incomprehensible is that it is incomprehensible. From social media to the global economy to supply chains, our lives rest precariously on systems that have become so complex, and we have yielded so much of them to technologies and autonomous actors, that no one totally comprehends it all.

In other words: No one’s driving. And if we hope to retake the wheel, we’re going to have to understand, intimately, all of the ways we’ve lost control…

The internet might be the system that we interact with in the most direct and intimate ways, but most of us have little comprehension of what lies behind our finger-smudged touchscreens. Made up of data centers, internet exchanges, huge corporations, tiny startups, investors, social media platforms, datasets, adtech companies, and billions of users and their connected devices, it’s a vast network dedicated to mining, creating, and moving data on scales we can’t comprehend. YouTube users upload more than 500 hours of video every minute — which works out to 82.2 years of video uploaded to YouTube every day. As of June 30, 2020, there were over 2.7 billion monthly active Facebook users, with 1.79 billion people on average logging on daily. Each day, 500 million tweets are sent — or 6,000 tweets every second, with a day’s worth of tweets filling a 10-million-page book. Every day, 65 billion messages are sent on WhatsApp. By 2025, it’s estimated that 463 million terabytes of data will be created each day — at 4.7 GB per disc, the equivalent of roughly 100 billion DVDs…
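
Those figures are easy to sanity-check; the arithmetic below is mine (taking a DVD as a standard 4.7 GB single-layer disc), and it is why the DVD equivalence above reads in the billions.

```python
# Back-of-the-envelope checks on the figures quoted above.
hours_per_day = 500 * 60 * 24                  # 500 hours uploaded per minute
print(hours_per_day / (24 * 365))              # ~82.2 years of video per day

print(500_000_000 / (24 * 60 * 60))            # ~5,787 tweets per second ("6,000")

bytes_per_day = 463e6 * 1e12                   # 463 million terabytes
print(f"{bytes_per_day / 4.7e9:.3g} DVDs")     # ~9.85e+10, i.e. ~100 billion DVDs
```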

What we’ve ended up with is a civilization built on the constant flow of physical goods, capital, and data, and the networks we’ve built to manage those flows in the most efficient ways have become so vast and complex that they’re now beyond the scale of any single human (and, arguably, any group or team of humans) understanding them. It’s tempting to think of these networks as huge organisms, with tentacles spanning the globe that touch everything and interlink with one another, but I’m not sure the metaphor is apt. An organism suggests some form of centralized intelligence, a nervous system with a brain at its center, processing data through feedback loops and making decisions. But the reality with these networks is much closer to the concept of distributed intelligence or distributed knowledge, where many different agents with limited information beyond their immediate environment interact in ways that lead to decision-making, often without them even knowing that’s what they’re doing…

Ceding control to vast unaccountable networks not only risks those networks going off the rails, it also threatens democracy itself. If we are struggling to understand or influence anything more than very small parts of them, this is also increasingly true for politicians and world leaders. Like the captain of the container ship, politicians and voters have less and less control over how any of these networks run. Instead they find themselves merely managing very small parts of them — they certainly don’t seem to be able to make drastic changes to those networks (which are mainly owned by private corporate industries anyway) even though they have a very direct impact on their nations’ economies, policies, and populations. To paraphrase the filmmaker Adam Curtis, instead of electing visionary leaders, we are in fact just voting for middle managers in a complex, global system that nobody fully controls.

The result of this feels increasingly like a democratic vacuum. We live in an era where voters have record levels of distrust for politicians, partly because they can feel this disconnect — they see from everyday reality that, despite their claims, politicians can’t effect change. Not really. They might not understand why, exactly, but there’s this increasing sense that leaders have lost the ability to make fundamental changes to our economic and social realities. The result is a large body of mainstream voters that wants to burn down the status quo. They want change, but don’t see politicians being able to deliver it. It feels like they’re trapped in a car accelerating at full throttle, but no one is driving.

They may not be able to do much about it, but there are mainstream politicians and elected leaders who see this vacuum for what it is — and see how it provides them with a political opportunity. Figures like Donald Trump and Boris Johnson certainly don’t believe in patching up the failures of this system — if anything, they believe in accelerating the process, deregulating, handing more power to the networks. No, for them this is a political vacuum that can be filled with blame. With finger-pointing and scapegoating. It is an opportunity to make themselves look powerful by pandering to fears, by evoking nationalism, racism, and fascism.

Donald Trump has still not conceded the 2020 election despite Joe Biden’s clear victory, and is leaning in part on the fact that the United States has a complex and sometimes opaque voting system that most of the public doesn’t understand to spread conspiracy theories about glitchy or malfeasant voting machines switching or deleting millions of votes. It’s perhaps no coincidence that some of the highest-profile figures on the right — like ex-Trump-adviser Steve Bannon or Brexit Party leader Nigel Farage — have backgrounds in the financial industry. These are political players who have seen how complicated things have become and can sense the gap in public comprehension but want to fill it with chaos and conspiracies rather than explanations…

As Tim Maughan (@TimMaughan) explains, vast systems, from automated supply chains to high-frequency trading, now undergird our daily lives — and we’re losing control of all of them: “The Modern World Has Finally Become Too Complex for Any of Us to Understand” (the first of a series of monthly columns that will “locate ways that we can try to increase our knowledge of the seemingly unknowable, as well as find strategies to counter the powerlessness and anxiety the system produces”).

* Confucius

###

As we contemplate complexity, we might send emergent birthday greetings to Per Bak; he was born on this date in 1948. A theoretical physicist, he is credited with developing the concept (and coining the name) of “self-organized criticality,” an explanation of how very complex phenomena (like consciousness) emerge from the interaction of simple components.
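
Bak’s canonical illustration was the sandpile: drop grains one at a time onto a grid, let any site holding four or more grains topple one grain onto each neighbor, and avalanches of every size emerge without any tuning. Here is a minimal sketch of that Bak–Tang–Wiesenfeld model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
grid = np.zeros((N, N), dtype=int)

def drop_grain(grid):
    # Add one grain at a random site, then topple until stable: any
    # site holding 4+ grains sheds 4, one to each neighbor (grains
    # toppled off the edge simply vanish).
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1
    avalanche = 0
    while (grid >= 4).any():
        for i, j in zip(*np.where(grid >= 4)):
            grid[i, j] -= 4
            avalanche += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni, nj] += 1
    return avalanche

sizes = [drop_grain(grid) for _ in range(20_000)]
# After a transient, avalanche sizes follow a power law: mostly tiny
# events punctuated by rare system-spanning cascades -- the signature
# of the critical state the pile organizes itself into.
print("largest avalanche:", max(sizes))
print("avalanches over 100 topplings:", sum(s > 100 for s in sizes))
```

No parameter in the model is tuned to a critical value; the drive-and-relax dynamics find the critical state on their own, which is exactly what “self-organized” means.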

source

Written by (Roughly) Daily

December 8, 2020 at 1:01 am

“Simulation is the situation created by any system of signs when it becomes sophisticated enough, autonomous enough, to abolish its own referent and to replace it with itself”*…

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host, Neil deGrasse Tyson, had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer…
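
The flavor of that “about 50–50” result is easy to reproduce with a toy Bayesian calculation. The sketch below is my own construction, following the gist of the argument rather than the published analysis: give equal prior weight to a cosmos that never simulates conscious beings and to one that runs a great many such simulations, then ask where a randomly chosen observer sits.

```python
from fractions import Fraction

# Toy version of the Bayesian argument (my construction, not the
# published analysis). Two hypotheses, equal priors:
#   "nulliparous": base reality exists and never simulates minds
#   "parous":      base reality runs n_sims conscious simulations
prior_null = Fraction(1, 2)
prior_parous = Fraction(1, 2)

n_sims = 10**6  # assumed count of conscious simulations, if parous

# P(we are simulated | hypothesis):
p_sim_given_null = Fraction(0)                     # nobody is simulated
p_sim_given_parous = Fraction(n_sims, n_sims + 1)  # almost everybody is

p_simulated = (prior_null * p_sim_given_null
               + prior_parous * p_sim_given_parous)
print(float(p_simulated))  # 0.4999995 -- just shy of even odds
```

The moment the parous branch is confirmed (say, by humans building conscious simulations), its prior jumps to 1 and the same arithmetic tips overwhelmingly toward our being simulated, which is the tilt the excerpt describes.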

Learn why gauging whether or not we dwell inside someone else’s computer may come down to advanced AI research—or measurements at the frontiers of cosmology: “Do We Live in a Simulation? Chances Are about 50–50.”

* Jean Baudrillard (who was describing the ways in which the significations and symbolism of culture and media are involved in constructing an understanding of shared existence… which may or may not, itself, be a simulation)

###

As we play the odds, we might send dark birthday greetings to Friedrich Wilhelm Nietzsche; he was born on this date in 1844. A philosopher, cultural critic, composer, poet, and philologist, he and his work have had a profound influence on modern intellectual history.

Nietzsche became the youngest person ever to hold the Chair of Classical Philology at the University of Basel in 1869 at the age of 24, but resigned in 1879 due to health problems that plagued him most of his life. He completed much of his core writing in the following decade, before suffering a complete mental breakdown in 1889, after which he lived in care until his death in 1900.

Nietzsche’s writing spanned philosophical polemics, poetry, cultural criticism, and fiction, all the while displaying a fondness for aphorism and irony. He’s best remembered as a philosopher, for work that included his radical critique of truth in favor of perspectivism; for his genealogical critique of religion and Christian morality (and his related theory of master–slave morality); for his aesthetic affirmation of existence in response to his famous observation of the “death of God” and the profound crisis of nihilism; for his notion of the Apollonian and Dionysian; and for his characterization of the human subject as the expression of competing wills, collectively understood as the will to power. Nietzsche also developed influential concepts such as the Übermensch and the doctrine of eternal return.

After his death, his sister Elisabeth became the curator and editor of Nietzsche’s manuscripts. She edited his unpublished writings to fit her German nationalist beliefs – often contradicting or obfuscating Nietzsche’s stated opinions, which were explicitly opposed to antisemitism and nationalism. Through her published editions, Nietzsche’s work became associated with fascism and Nazism. But scholars contested this interpretation, and corrected editions of his writings were soon made available. Nietzsche’s thought enjoyed renewed popularity in the 1960s and his ideas have since had a profound impact on 20th and early-21st century thinkers across philosophy—especially in schools of continental philosophy such as existentialism, postmodernism and post-structuralism—as well as in art, literature, psychology, politics, and popular culture.

source

Written by (Roughly) Daily

October 15, 2020 at 1:01 am
