(Roughly) Daily

Posts Tagged ‘cognition’

“Poetry might be defined as the clear expression of mixed feelings”*…

Can artificial intelligence have those feelings? Scientist and poet Keith Holyoak explores:

… Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?…

A provocative consideration: “Can AI Write Authentic Poetry?” from @mitpress.

Apposite: a fascinating Twitter thread on “why the GPT-3 algorithm’s proficiency at producing fluent, correct-seeming prose is an exciting opportunity for improving how we teach writing, how students learn to write, and how this can also benefit profs who assign writing but don’t necessarily teach it.”

* W. H. Auden

###

As we ruminate on rhymes, we might send thoughtful birthday greetings to Michael Gazzaniga; he was born on this date in 1939. A leading researcher in cognitive neuroscience (the study of the neural basis of mind), his work has focused on how the brain enables humans to perform those advanced mental functions that are generally associated with what we call “the mind.” Gazzaniga has made significant contributions to the emerging understanding of how the brain facilitates such higher cognitive functions as remembering, speaking, interpreting, and making judgments.


Written by (Roughly) Daily

December 12, 2022 at 1:00 am

“The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office”*…

For as long as humans have thought, humans have thought about thinking. George Cave on the power and the limits of the metaphors we’ve used to do that…

For thousands of years, humans have described their understanding of intelligence with engineering metaphors. In the 3rd century BCE, the invention of hydraulics popularized the model of fluid flow (“humours”) in the body. This lasted until the 1500s, when it was supplanted by the invention of automata and the idea of humans as complex machines. From electrical and chemical metaphors in the 1700s to advances in communications a century later, each metaphor reflected the most advanced thinking of that era. Today is no different: we talk of brains that store, process and retrieve memories, mirroring the language of computers.

I’ve always believed metaphors to be helpful and productive in communicating unfamiliar concepts. But this fascinating history of cognitive science metaphors shows that flawed metaphors can take hold and limit the scope for alternative ideas. In the worst case, the EU spent 10 years and $1.3 billion building a model of the brain based on the incorrect belief that the brain functions like a computer…

Thinking about thinking, from @George_Cave in @the_prepared.

Apposite: “Finding Language in the Brain.”

* Robert Frost

###

As we cogitate on cognition, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages, which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.


Written by (Roughly) Daily

December 9, 2022 at 1:00 am

“There is nothing more tentative, nothing more empirical (superficially, at least) than the process of establishing an order among things; nothing that demands a sharper eye or a surer, better-articulated language”*

James Vincent on the emergence of the earliest writing and its impact on culture, with special attention to the phenomenon of the “list” and its role in the birth of metrology…

Measurement was a crucial organizing principle in ancient Egypt, but metrology itself does not begin with nilometers. To understand its place in human culture, we have to trace its roots back further, to the invention of writing itself. For without writing, no measures can be recorded. The best evidence suggests that the written word was created independently thousands of years ago by a number of different cultures scattered around the world: in Mesopotamia, Mesoamerica, China, and Egypt. But it’s in Mesopotamia—present-day Iraq—where the practice is thought to have been invented first.

There’s some debate over whether this invention of writing enabled the first states to emerge, giving their rulers the ability to oversee and allocate resources, or whether it was the demands of the early states that in turn led to the invention of writing. Either way, the scribal arts offered dramatic new ways to process knowledge, allowing for not only superior organization, but also superior thinking. Some scholars argue that the splitting of noun and number on clay tablets didn’t just allow kings to better track their taxes but was tantamount to a cognitive revolution: a leap forward that allowed humans to abstract and categorize the world around them like never before.

Lists may not seem like cognitive dynamite, but their proliferation appears to have helped develop new modes of thought in early societies, encouraging us to think analytically about the world. “The list relies on discontinuity rather than continuity,” writes anthropologist Jack Goody. “[I]t encourages the ordering of the items, by number, by initial sound, by category, etc. And the existence of boundaries, external and internal, brings greater visibility to categories, at the same time as making them more abstract.”…

More at: “What If… Listicles Are Actually an Ancient Form of Writing and Narrative?” from @jjvincent in @lithub.

* Michel Foucault

###

As we organize, we might recall that it was on this date in 1872 that the Mary Celeste (often erroneously referred to as Marie Celeste, per a Conan Doyle short story about the ship), an American-registered merchant brigantine, was discovered adrift and deserted in the Atlantic Ocean off the Azores Islands.

The Canadian brigantine Dei Gratia found her in a dishevelled but seaworthy condition under partial sail and with her lifeboat missing. The last entry in her log was dated ten days earlier. She had left New York City for Genoa on November 7 and was still amply provisioned when found. Her cargo of alcohol was intact, and the captain’s and crew’s personal belongings were undisturbed. None of those who had been on board were ever seen or heard from again.

At the salvage hearings in Gibraltar following her recovery, the court’s officers considered various possibilities of foul play, including mutiny by Mary Celeste’s crew, piracy by the Dei Gratia crew or others, and conspiracy to carry out insurance or salvage fraud. No convincing evidence supported these theories, but unresolved suspicions led to a relatively low salvage award.

The inconclusive nature of the hearings fostered continued speculation as to the nature of the mystery. Hypotheses that have been advanced include the effects on the crew of alcohol fumes rising from the cargo, submarine earthquakes, waterspouts, attack by a giant squid, and paranormal intervention.

After the Gibraltar hearings, Mary Celeste continued in service under new owners. In 1885, her captain deliberately wrecked her off the coast of Haiti as part of an attempted insurance fraud.

The ship in 1861 (source)

Written by (Roughly) Daily

December 4, 2022 at 1:00 am

“It takes something more than intelligence to act intelligently”*…

AI isn’t human, but that doesn’t mean, Nathan Gardels argues (citing three recent essays in Noema, the magazine that he edits), that it cannot be intelligent…

As the authors point out, “the dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data.”

Critics of this approach argue that its “insurmountable wall” is “symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.).”
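To make the contrast concrete, here is a minimal Python sketch (my illustration, not anything from the essay) of the kind of strict, step-by-step symbol manipulation the critics have in mind: column addition over digit strings, where each rule handles one column and carries the extra value to the column on its left.

```python
# Toy sketch (illustration only, not from the essay): grade-school column
# addition as explicit symbol manipulation. Each step applies a fixed rule to
# the rightmost unprocessed column and carries any extra value to the left.

def column_add(x: str, y: str) -> str:
    """Add two non-negative integers written as strings of digit symbols."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)       # align the columns
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):  # work from the rightmost column
        total = int(a) + int(b) + carry
        carry, out = divmod(total, 10)          # keep one digit, carry the rest
        digits.append(str(out))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("478", "365"))  # -> "843"
```

Every step here is an explicit, hand-written rule applied to symbols; nothing is learned from data, which is the style of reasoning the critics argue pattern-learning networks do not natively possess.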

Such reasoning would enable logical inferences that can apply what has been learned to unprogrammed contingencies, thus “completing patterns” by connecting the dots. LeCun and Browning argue that, as with the evolution of the human mind itself, in time and with manifold experiences, this ability may emerge as well from the neural networks of intelligent machines.

“Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach,” they contend. “They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.”

The philosopher Charles Taylor associates the breakthroughs of consciousness in that era with the arrival of written language. In his view, access to the stored memories of this first cloud technology enabled the interiority of sustained reflection from which symbolic competencies evolved.

This “transcendence” beyond oral narrative myth narrowly grounded in one’s own immediate circumstance and experience gave rise to what the sociologist Robert Bellah called “theoretic culture” — a mental organization of the world at large into the abstraction of symbols. The universalization of abstraction, in turn and over a long period of time, enabled the emergence of systems of thought ranging from monotheistic religions to the scientific reasoning of the Enlightenment.

Not unlike the transition from oral to written culture, might AI be the midwife to the next step of evolution? As has been written in this column before, we have only become aware of climate change through planetary computation that abstractly models the Earthly organism beyond what any of us could conceive out of our own un-encompassing knowledge or direct experience.

For Bratton and Agüera y Arcas, it comes down in the end to language as the “cognitive infrastructure” that can comprehend patterns, referential context and the relationality among them when facing novel events.

“There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real-world referents,” they observe.

As an “executable” translation of human language, code does not produce the same kind of intelligence that emerges from human consciousness, but is intelligence nonetheless. What is most likely to emerge in their view is not “artificial” intelligence when machines become more human, but “synthetic” intelligence, which fuses both.

As AI further develops through human prompt or a capacity to guide its own evolution by acquiring a sense of itself in the world, what is clear is that it is well on the way to taking its place alongside, perhaps conjoining and becoming synthesized with, other intelligences, from homo sapiens to insects to forests to the planetary organism itself…

AI takes its place among and may conjoin with other multiple intelligences: “Cognizant Machines: A What Is Not A Who.” Both the linked essay and the articles referenced in it are eminently worth reading in full.

* Dostoyevsky, Crime and Punishment

###

As we make room for company, we might recall that it was on this date in 1911 that a telegraph operator on the 7th floor of The New York Times headquarters in Times Square sent a message, “This message sent around the world,” that left at 7:00 p.m., traveled over 28,000 miles, and was relayed by 16 different operators. It arrived back at the Times only 16.5 minutes later.

The “around the world telegraphy” record had been set in 1903, when President Roosevelt celebrated the completion of the Commercial Pacific Cable by sending the first round-the-world message in just 9 minutes. But that message had been given priority status; the Times wanted to see how long a regular message would take — and what route it would follow.

The building from which the message originated is now called One Times Square and is best known as the site of the New Year’s Eve ball drop.


Written by (Roughly) Daily

August 20, 2022 at 1:00 am

“The past, like the future, is indefinite and exists only as a spectrum of possibilities”*…

A recent paper by Robert Lanza and others suggests that physical reality isn’t independent of us, “objective,” but is the product of networks of observers…

Is there physical reality that is independent of us? Does objective reality exist at all? Or is the structure of everything, including time and space, created by the perceptions of those observing it? Such is the groundbreaking assertion of a new paper published in the Journal of Cosmology and Astroparticle Physics.

The paper’s authors include Robert Lanza, a stem cell and regenerative medicine expert, famous for the theory of biocentrism, which argues that consciousness is the driving force for the existence of the universe. He believes that the physical world that we perceive is not something that’s separate from us but rather created by our minds as we observe it. According to his biocentric view, space and time are a byproduct of the “whirl of information” in our head that is woven together by our mind into a coherent experience.

His new paper, co-authored by Dmitriy Podolskiy and Andrei Barvinsky, theorists in quantum gravity and quantum cosmology, shows how observers influence the structure of our reality.

According to Lanza and his colleagues, observers can dramatically affect “the behavior of observable quantities” both at microscopic and massive spatiotemporal scales. In fact, a “profound shift in our ordinary everyday worldview” is necessary, wrote Lanza in an interview with Big Think. The world is not something that is formed outside of us, simply existing on its own. “Observers ultimately define the structure of physical reality itself,” he stated.

How does this work? Lanza contends that a network of observers is necessary and is “inherent to the structure of reality.” As he explains, observers — you, me, and anyone else — live in a quantum gravitational universe and come up with “a globally agreed-upon cognitive model” of reality by exchanging information about the properties of spacetime. “For, once you measure something,” Lanza writes, “the wave of probability to measure the same value of the already probed physical quantity becomes ‘localized’ or simply ‘collapses.’” That’s how reality comes to be consistently real to us all. Once you keep measuring a quantity over and over, knowing the result of the first measurement, you will see the outcome to be the same.
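To see the mechanics behind that claim spelled out, here is a toy Python sketch (my own illustration, not from Lanza’s paper) of a two-state system: the first measurement samples an outcome from the squared amplitudes, the state is then replaced by the corresponding definite state (“collapse”), and every later measurement of the same quantity repeats that result.

```python
# Toy sketch (illustration only, not from the paper): a two-state system in a
# superposition. The first measurement picks an outcome with probability
# |amplitude|^2 (the Born rule); the state then "collapses" onto that outcome,
# so every later measurement of the same quantity returns the same value.
import random

def measure(state):
    """state = (amp0, amp1); return the outcome and the collapsed state."""
    p0 = abs(state[0]) ** 2
    outcome = 0 if random.random() < p0 else 1
    collapsed = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
    return outcome, collapsed

state = (0.6, 0.8)                   # superposition: P(0) = 0.36, P(1) = 0.64
results = []
for _ in range(5):
    outcome, state = measure(state)  # after the first call the state is frozen
    results.append(outcome)

print(results)                       # e.g. [1, 1, 1, 1, 1]: all five agree
```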

“Similarly, if you learn from somebody about the outcomes of their measurements of a physical quantity, your measurements and those of other observers influence each other ‒ freezing the reality according to that consensus,” added Lanza, explaining further that “a consensus of different opinions regarding the structure of reality defines its very form, shaping the underlying quantum foam.”

In quantum terms, an observer influences reality through decoherence, which provides the framework for collapsing waves of probability, “largely localized in the vicinity of the cognitive model which the observer builds in their mind throughout their lifespan,” he added.

Lanza says, “The observer is the first cause, the vital force that collapses not only the present, but the cascade of spatiotemporal events we call the past. Stephen Hawking was right when he said: ‘The past, like the future, is indefinite and exists only as a spectrum of possibilities.’”

Could an artificially intelligent entity without consciousness be dreaming up our world? Lanza believes biology plays an important role, as he explains in his book The Grand Biocentric Design: How Life Creates Reality, which he co-authored with the physicist Matej Pavsic.

While a bot could conceivably be an observer, Lanza thinks a conscious living entity with the capacity for memory is necessary to establish the arrow of time. “‘A brainless’ observer does not experience time and/or decoherence with any degree of freedom,” writes Lanza. This leads to the cause and effect relationships we can notice around us. Lanza thinks that “we can only say for sure that a conscious observer does indeed collapse a quantum wave function.”…

Another key aspect of their work is that it resolves “the exasperating incompatibility between quantum mechanics and general relativity,” which was a sticking point even for Albert Einstein.

The seeming incongruity of these two explanations of our physical world — with quantum mechanics looking at the molecular and subatomic levels and general relativity at the interactions between massive cosmic structures like galaxies and black holes — disappears once the properties of observers are taken into account.

While this all may sound speculative, Lanza says their ideas are being tested using Monte Carlo simulations on powerful MIT computer clusters and will soon be tested experimentally.

Is the physical universe independent from us, or is it created by our minds? “Is human consciousness creating reality?” from @RobertLanza.

We might wonder, if this is so, how reality emerged at all. Perhaps one possibility is implied in “Consciousness was upon him before he could get out of the way.”

* Stephen Hawking

###

As we conjure with consciousness, we might recall that it was on this date in 1908 (the same year that he was awarded the Nobel Prize in Chemistry) that Ernest Rutherford announced in London that he had isolated a single atom of matter. The following year, he, Hans Geiger (later of “counter” fame), and Ernest Marsden conducted the “Gold Foil Experiment,” the results of which replaced J. J. Thomson’s “Plum Pudding Model” of the atom with what became known as the “Rutherford Model”: a very small charged nucleus, containing much of the atom’s mass, orbited by low-mass electrons.

