(Roughly) Daily

Posts Tagged ‘AI’

“Prediction is very difficult, especially if it’s about the future”*…

… but maybe not as hard as it once was. While multi-agent artificial intelligence was first used in the sixties, advances in technology have made it an extremely sophisticated modeling and prediction tool. As Derek Beres explains, it can be a powerfully accurate prediction engine… and it can potentially also be an equally powerful tool for manipulation…

The debate over free will is ancient, yet data don’t lie — and we have been giving tech companies access to our deepest secrets… We like to believe we’re not predictable, but that’s simply not true…

Multi-agent artificial intelligence (MAAI) is predictive modeling at its most advanced. It has been used for years to create digital societies that mimic real ones with stunningly accurate results. In an age of big data, there exists more information about our habits — political, social, fiscal — than ever before. As we feed these models information on a daily basis, their ability to predict the future is getting better.

[And] given the current political climate around the planet… MAAI will most certainly be put to insidious ends. With in-depth knowledge comes plenty of opportunities for exploitation and manipulation, no deepfake required. The intelligence might be artificial, but the target audience most certainly is not…
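A minimal sketch of the kind of agent-based modeling MAAI builds on (a toy illustration in Python; the agents, numbers, and dynamics here are invented, not drawn from Beres’s article): no individual agent is scripted, yet the aggregate behavior is reproducible and predictable.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def simulate_opinions(n_agents=1000, steps=50, nudge=0.1):
    """Toy multi-agent model: each agent holds an opinion in [-1, 1]
    and drifts toward the average of a few randomly met peers."""
    opinions = [random.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        for i in range(n_agents):
            peers = random.sample(range(n_agents), 5)
            peer_avg = sum(opinions[p] for p in peers) / len(peers)
            # Each encounter pulls the agent slightly toward its peers,
            # so a collective pattern emerges without any agent being scripted.
            opinions[i] += nudge * (peer_avg - opinions[i])
    return opinions

final = simulate_opinions()
print(f"opinion spread after simulation: {max(final) - min(final):.3f}")
```

Real MAAI systems layer far richer agent state and learned behavior on top of this skeleton, but the principle is the same: feed the model better data about individuals, and the collective forecast sharpens.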

Move over, deepfakes; multi-agent artificial intelligence is poised to manipulate your mind: “Can AI simulations predict the future?,” from @derekberes at @bigthink.

[Image above: source]

* Niels Bohr


As we analyze augury, we might note that today is National Computer Security Day. It was inaugurated by the Association for Computing Machinery (ACM) in 1988, shortly after an attack on ARPANET (the forerunner of the internet as we know it) that damaged several of the connected machines. Meant to call attention to the constant need for vigilance about security, it’s a great day to change all of one’s passwords.


Written by (Roughly) Daily

November 30, 2022 at 1:00 am

“The intelligence of the universe is social”*…

From the series Neural Zoo by Sofia Crespo

Recently, (Roughly) Daily looked at AI and our (that’s to say, humans’) possible relationships to it. In a consideration of James Bridle‘s new book, Ways of Being, Doug Bierend widens the iris, considering our relationship not only to intelligences we might create but also to those with which we already co-habit…

It’s lonely at the top, but it doesn’t have to be. We humans tend to see ourselves as the anointed objects of evolution, our intelligence representing the leading edge of unlikely order cultivated amid an entropic universe. While there is no way to determine any purpose or intention behind the processes that produced us, let alone where they will or should lead, that hasn’t stopped some from making assertions. 

For example, consider the school of thought called longtermism, explored by Phil Torres in this essay for Aeon. Longtermism — a worldview held, as Torres notes, by some highly influential people including Elon Musk, Peter Thiel, tech entrepreneur Jaan Tallinn, and Jason Matheny, President Biden’s deputy assistant for technology and national security — essentially sees the prime directive of Homo sapiens as one of maximizing the “potential” of our species. That potential — often defined along such utilitarian lines as maximizing the population, distribution, longevity, and comfort that future humans could achieve over the coming millennia — is what longtermists say should drive the decisions we make today. Its most extreme version represents a kind of interstellar manifest destiny, human exceptionalism on the vastest possible scale. The stars are mere substrate for the extension and preservation of our species’ putatively unique gifts. Some fondly imagine our distant descendants cast throughout the universe in womb-like symbiosis with machines, ensconced in virtual environments enjoying perpetual states of bliss — The Matrix as utopia.

Longtermist philosophy also overlaps with the “transhumanist” line of thought, articulated by figures such as philosopher Nick Bostrom, who describes human nature as incomplete, “a half-baked beginning that we can learn to remold in desirable ways.” Here, humanity as currently or historically constituted isn’t an end so much as a means of realizing some far greater fate. Transhumanism espouses the possibility of slipping the surly bonds of our limited brains and bodies to become “more than human,” in a sense reminiscent of fictional android builder Eldon Tyrell in Blade Runner: “Commerce is our goal,” Tyrell boasts. “‘More human than human’ is our motto.” Rather than celebrating and deepening our role within the world that produced us, these outlooks seek to exaggerate and consummate a centuries-long process of separation historically enabled by the paired forces of technology and capital. 

But this is not the only possible conception of the more than human. In their excellent new book Ways of Being, James Bridle also invokes the “more than human,” not as an effort to exceed our own limitations through various forms of enhancement but as a mega-category that collects within it essentially everything, from microbes and plants to water and stone, even machines. It is a grouping so vast and diverse as to be indefinable, which is part of Bridle’s point: The category disappears, and the interactions within it are what matters. More-than-human, in this usage, dismisses human exceptionalism in favor of recognizing the ecological nature of our existence, the co-construction of our lives, futures, and minds with the world itself. 

From this point of view, human intelligence is just one form of a more universal phenomenon, an emergent “flowering” found all throughout the evolutionary tree. It is among the tangled bramble of all life that our intelligence becomes intelligible, a gestalt rather than a particular trait. As Bridle writes, “intelligence is not something which exists, but something one does. It is active, interpersonal and generative, and it manifests when we think and act.” In Bridle’s telling, mind and meaning alike exist by way of relationship with everything else in the world, living or not. Accepting this, it makes little sense to elevate human agency and priorities above all others. If our minds are exceptional, it is still only in terms of their relationship to everything else that acts within the world. That is, our minds, like our bodies, aren’t just ours; they are contingent on everything else, which would suggest that the path forward should involve moving with the wider world rather than attempting to escape or surpass it.

This way of thinking borrows heavily from Indigenous concepts and cosmologies. It decenters human perspective and priorities, instead setting them within an infinite concatenation of agents engaged in the collective project of existence. No one viewpoint is more favored than another, not even that of the biological over the mineral or mechanical. It is an invitation to engage with the “more-than-human” world not as though it consisted of objects but rather of fellow subjects. This would cut against the impulse to enclose and conquer nature, which has been reified by our very study of it….

Technology often presupposes human domination, but it could instead reflect our ecological dependence: “Entangled Intelligence,” from @DougBierend in @_reallifemag (via @inevernu and @sentiers). Eminently worth reading in full.

* Marcus Aurelius, The Meditations


As we welcome fellow travelers, we might recall that this date in 1752 was the final day of use of the Julian calendar in Great Britain, Ireland, and the British colonies, including those on the East Coast of America. Eleven days were skipped to sync to the Gregorian calendar, which was designed to realign the calendar with the equinoxes. Hence the following day was September 14. (Most of Europe had shifted, by Papal decree, to the Gregorian calendar in the 16th century; Russia and China made the move in the 20th century.)


Written by (Roughly) Daily

September 2, 2022 at 1:00 am

“It takes something more than intelligence to act intelligently”*…

AI isn’t human, but that doesn’t mean, Nathan Gardels argues (citing three recent essays in Noema, the magazine that he edits), that it cannot be intelligent…

As the authors point out, “the dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data.”

Critics of this approach argue that its “insurmountable wall” is “symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.).”

Such reasoning would enable logical inferences that can apply what has been learned to unprogrammed contingencies, thus “completing patterns” by connecting the dots. LeCun and Browning argue that, as with the evolution of the human mind itself, in time and with manifold experiences, this ability may emerge as well from the neural networks of intelligent machines.

“Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach,” they contend. “They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.”
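The “step-by-step manipulation of symbols according to strict rules” that the critics describe can be made concrete with a toy example (mine, not the essay’s): grade-school addition performed on digit characters, where the program follows the column-and-carry rules with no built-in notion of what the numerals mean.

```python
def add_digit_strings(a: str, b: str) -> str:
    """Grade-school addition as rule-following symbol manipulation:
    walk the columns right to left, add digit values, carry."""
    digits = "0123456789"
    width = max(len(a), len(b))
    # Pad the shorter numeral so the columns line up.
    a, b = a.rjust(width, "0"), b.rjust(width, "0")
    result, carry = [], 0
    for col in range(width - 1, -1, -1):
        total = digits.index(a[col]) + digits.index(b[col]) + carry
        result.append(digits[total % 10])  # write this column's digit
        carry = total // 10                # carry the extra value leftward
    if carry:
        result.append(digits[carry])
    return "".join(reversed(result))

print(add_digit_strings("478", "694"))  # the carry rule yields "1172"
```

The open question the essay raises is whether a neural network, which learns patterns rather than rules, can come to perform this kind of strict symbolic procedure reliably.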

The philosopher Charles Taylor associates the early breakthroughs of human consciousness with the arrival of written language. In his view, access to the stored memories of this first cloud technology enabled the interiority of sustained reflection from which symbolic competencies evolved.

This “transcendence” beyond oral narrative myth narrowly grounded in one’s own immediate circumstance and experience gave rise to what the sociologist Robert Bellah called “theoretic culture” — a mental organization of the world at large into the abstraction of symbols. The universalization of abstraction, in turn and over a long period of time, enabled the emergence of systems of thought ranging from monotheistic religions to the scientific reasoning of the Enlightenment.

Not unlike the transition from oral to written culture, might AI be the midwife to the next step of evolution? As has been written in this column before, we have only become aware of climate change through planetary computation that abstractly models the Earthly organism beyond what any of us could conceive out of our own un-encompassing knowledge or direct experience.

For Bratton and Agüera y Arcas, it comes down in the end to language as the “cognitive infrastructure” that can comprehend patterns, referential context and the relationality among them when facing novel events.

“There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real-world referents,” they observe.

As an “executable” translation of human language, code does not produce the same kind of intelligence that emerges from human consciousness, but is intelligence nonetheless. What is most likely to emerge in their view is not “artificial” intelligence when machines become more human, but “synthetic” intelligence, which fuses both.

As AI further develops through human prompt or a capacity to guide its own evolution by acquiring a sense of itself in the world, what is clear is that it is well on the way to taking its place alongside, perhaps conjoining and becoming synthesized with, other intelligences, from Homo sapiens to insects to forests to the planetary organism itself…

AI takes its place among, and may conjoin with, other intelligences: “Cognizant Machines: A What Is Not A Who.” Eminently worth reading in full, both the linked essay and the articles referenced in it.

* Dostoyevsky, Crime and Punishment


As we make room for company, we might recall that it was on this date in 1911 that a telegraph operator on the 7th floor of The New York Times headquarters in Times Square sent a message– “This message sent around the world”– that left at 7:00 pm, traveled over 28,000 miles, and was relayed by 16 different operators. It arrived back at the Times only 16.5 minutes later.

The “around the world telegraphy” record had been set in 1903, when President Roosevelt celebrated the completion of the Commercial Pacific Cable by sending the first round-the-world message in just 9 minutes. But that message had been given priority status; the Times wanted to see how long a regular message would take — and what route it would follow.

The building from which the message originated is now called One Times Square and is best known as the site of the New Year’s Eve ball drop.


Written by (Roughly) Daily

August 20, 2022 at 1:00 am

“The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust.”*…

Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity — a place where extrapolation breaks down and new models must be applied — and the world will pass beyond our understanding.

Vernor Vinge, True Names and Other Dangers

The once-vibrant transhumanist movement doesn’t capture as much attention as it used to; but as George Dvorsky explains, its ideas are far from dead. Indeed, they helped seed the Futurist movements that are so prominent today…

[On the heels of 9/11] transhumanism made a lot of sense to me, as it seemed to represent the logical next step in our evolution, albeit an evolution guided by humans and not Darwinian selection. As a cultural and intellectual movement, transhumanism seeks to improve the human condition by developing, promoting, and disseminating technologies that significantly augment our cognitive, physical, and psychological capabilities. When I first stumbled upon the movement, the technological enablers of transhumanism were starting to come into focus: genomics, cybernetics, artificial intelligence, and nanotechnology. These tools carried the potential to radically transform our species, leading to humans with augmented intelligence and memory, unlimited lifespans, and entirely new physical and cognitive capabilities. And as a nascent Buddhist, it meant a lot to me that transhumanism held the potential to alleviate a considerable amount of suffering through the elimination of disease, infirmity, mental disorders, and the ravages of aging.

The idea that humans would transition to a posthuman state seemed both inevitable and desirable, but, having an apparently functional brain, I immediately recognized the potential for tremendous harm.

The term “transhumanism” popped into existence during the 20th century, but the idea has been around for a lot longer than that.

The quest for immortality has always been a part of our history, and it probably always will be. The Mesopotamian Epic of Gilgamesh is the earliest written example, while the Fountain of Youth—the literal Fountain of Youth—was the obsession of Spanish explorer Juan Ponce de León.

Notions that humans could somehow be modified or enhanced appeared during the European Enlightenment of the 18th century, with French philosopher Denis Diderot arguing that humans might someday redesign themselves into a multitude of types “whose future and final organic structure it’s impossible to predict,” as he wrote in D’Alembert’s Dream.

The Russian cosmists of the late 19th and early 20th centuries foreshadowed modern transhumanism, as they ruminated on space travel, physical rejuvenation, immortality, and the possibility of bringing the dead back to life, the latter being a portent of cryonics—a staple of modern transhumanist thinking. From the 1920s through to the 1950s, thinkers such as British biologist J. B. S. Haldane, Irish scientist J. D. Bernal, and British biologist Julian Huxley (who popularized the term “transhumanism” in a 1957 essay) were openly advocating for such things as artificial wombs, human clones, cybernetic implants, biological enhancements, and space exploration.

It wasn’t until the 1990s, however, that a cohesive transhumanist movement emerged, a development largely brought about by—you guessed it—the internet…

[There follows a brisk and helpful history of transhumanist thought, then an account of the recent past, and present…]

Some of the transhumanist groups that emerged in the 1990s and 2000s still exist or evolved into new forms, and while a strong pro-transhumanist subculture remains, the larger public seems detached and largely uninterested. But that’s not to say that these groups, or the transhumanist movement in general, didn’t have an impact…

“I think the movements had mainly an impact as intellectual salons where blue-sky discussions made people find important issues they later dug into professionally,” said [Anders] Sandberg. He pointed to Oxford University philosopher and transhumanist Nick Bostrom, who “discovered the importance of existential risk for thinking about the long-term future,” which resulted in an entirely new research direction. The Center for the Study of Existential Risk at the University of Cambridge and the Future of Humanity Institute at Oxford are the direct results of Bostrom’s work. Sandberg also cited artificial intelligence theorist Eliezer Yudkowsky, who “refined thinking about AI that led to the AI safety community forming,” and also the transhumanist “cryptoanarchists” who “did the groundwork for the cryptocurrency world,” he added. Indeed, Vitalik Buterin, a co-founder of Ethereum, subscribes to transhumanist thinking, and his father, Dmitry, used to attend our meetings at the Toronto Transhumanist Association…

Intellectual history: “What Ever Happened to the Transhumanists?,” from @dvorsky.

See also: “The Heaven of the Transhumanists” from @GenofMod (source of the image above).

* Donna Haraway, “A Cyborg Manifesto”


As we muse on mortality, we might send carefully-calculated birthday greetings to Marvin Minsky; he was born on this date in 1927.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.


“All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason.”*…

Descartes, the original (modern) Rationalist, and Immanuel Kant, who did his best to synthesize Descartes’ thought with empiricism (a la Hume)

As Robert Cottrell explains, a growing group of online thinkers couldn’t agree more…

Much of the best new writing online originates from activities in the real world — music, fine art, politics, law…

But there is also writing which belongs primarily to the world of the Internet, by virtue of its subject-matter and of its sensibility. In this category I would place the genre that calls itself Rationalism, the raw materials of which are cognitive science and mathematical logic.

I will capitalise Rationalism and Rationalists when referring to the writers and thinkers who are connected in one way or another with the Less Wrong forum (discussed below). I will do this to avoid confusion with the much broader mass of small-r “rational” thinkers — most of us, in fact — who believe their thinking to be founded on reasoning of some sort; and with “rationalistic” thinkers, a term used in the social sciences for people who favour the generalised application of scientific methods.

Capital-R Rationalism contends that there are specific techniques, drawn mainly from probability theory, by means of which people can teach themselves to think better and to act better — where “better” is intended not as a moral judgement but as a measure of efficiency. Capital-R Rationalism contends that, by recognising and eliminating biases common in human judgement, one can arrive at a more accurate view of the world and a more accurate view of one’s actions within it. When thus equipped with a more exact view of the world and of ourselves, we are far more likely to know what we want and to know how to get it.
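The techniques drawn from probability theory that Cottrell mentions center on Bayes’ rule: update your credence in a hypothesis in proportion to how strongly the evidence favors it. A minimal worked example (the numbers are invented for illustration) of the base-rate case that Rationalist writing often uses:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# A condition with a 1% base rate; a test that is 90% sensitive
# but also fires on 9% of unaffected cases.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.09)
print(f"P(condition | positive test) = {posterior:.3f}")  # about 0.092
```

Despite the impressive-sounding test, a positive result leaves the probability under ten percent; skipping this update is the base-rate neglect that Rationalists cite as a canonical bias to eliminate.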

Rationalism does not try to substitute for morality. It stops short of morality. It does not tell you how to feel about the truth once you think you have found it. By stopping short of morality it has the best of both worlds: It provides a rich framework for thought and action from which, in principle, one might advance, better equipped, into metaphysics. But the richness and complexity of deciding how to act Rationally in the world is such that nobody, having seriously committed to Rationalism, is ever likely to emerge on the far side of it.

The influence of Rationalism today is, I would say, comparable with that of existentialism in the mid-20th century. It offers a way of thinking and a guide to action with particular attractions for the intelligent, the dissident, the secular and the alienated. In Rationalism it is perfectly reasonable to contend that you are right while the World is wrong.

Rationalism is more of an applied than a pure discipline, so its effects are felt mainly in fields where its adepts tend to be concentrated. By far the highest concentration of Rationalists would appear to cohabit in the study and development of artificial intelligence; so it is hardly surprising that the main fruit of Rationalism to date has been the birth of a new academic field, existential risk studies, born of a convergence between Rationalism and AI, with science fiction playing a catalytic role. Leading figures in existential risk studies include Nicholas Bostrom at Oxford University and Jaan Tallinn at Cambridge University.

Another relatively new field, effective altruism, has emerged from a convergence of Rationalism and Utilitarianism, with the philosopher Peter Singer as catalyst. The leading figures in effective altruism, besides Singer, are Toby Ord, author of The Precipice; William MacAskill, author of Doing Good Better; and Holden Karnofsky, co-founder of GiveWell and blogger at Cold Takes.

A third new field, progress studies, has emerged very recently from the convergence of Rationalism and economics, with Tyler Cowen and Patrick Collison as its founding fathers. Progress studies seeks to identify, primarily from the study of history, the preconditions and factors which underpin economic growth and technological innovation, and to apply these insights in concrete ways to the promotion of future prosperity. The key text of progress studies is Cowen’s Stubborn Attachments.

I doubt there is any wholly original scientific content to Rationalism: It is a taker of facts from other fields, not a contributor to them. But by selecting and prioritising ideas which play well together, by dramatising them in the form of thought experiments, and by pursuing their applications to the limits of possibility (which far exceed the limits of common sense), Rationalism has become a contributor to the philosophical fields of logic and metaphysics and to conceptual aspects of artificial intelligence.

Tyler Cowen is beloved of Rationalists but would hesitate (I think) to identify with them. His attitude towards cognitive biases is more like that of Chesterton towards fences: Before seeking to remove them you should be sure that you understand why they were put there in the first place…

From hands-down the best guide I’ve found to the increasingly impactful ideas at work in Rationalism and its related fields, and to the thinkers behind them: “Do the Right Thing,” from @robertcottrell in @TheBrowser. Eminently worth reading in full.

[Image above: source]

* Immanuel Kant, Critique of Pure Reason


As we ponder precepts, we might recall that it was on this date in 1937 that Hormel went public with its own exercise in recombination when it introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.


Written by (Roughly) Daily

July 5, 2022 at 1:00 am
