(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“I would rather have questions that can’t be answered than answers that can’t be questioned”*…

… or, as Confucius would have it, “real knowledge is to know the extent of one’s ignorance.” Happily, Wikenigma is here to help…

Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.

Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [949 so far]

That’s to say, a compendium of so-called ‘Known Unknowns’…

Consider, for example…

How do marine turtles accurately migrate thousands of kilometers for nesting?

Can Beal’s conjecture be proved? (stated formally below)

Can one solve the “envelope paradox”?

Do “naked singularities” exist?

What is the etymology of the word “plot” (which appears only in English)?

What were the purposes of “Perforated Batons,” man-made prehistoric artifacts formed from deer antlers, dating back 12,000-24,000 years and found widely across Europe?

What are the function, importance, and evolutionary history of human “inner speech”?
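On the Beal question flagged above: the conjecture itself is easy to state precisely. A formal statement (not a proof sketch), for the curious:

```latex
% Beal's conjecture: if
%   A^x + B^y = C^z,
% where A, B, C, x, y, z are positive integers and x, y, z > 2,
% then A, B, and C share a common prime factor.
\[
  A^x + B^y = C^z,\quad A,B,C,x,y,z \in \mathbb{Z}^{+},\ x,y,z > 2
  \;\Longrightarrow\; \gcd(A,B,C) > 1
\]
```

It generalizes Fermat’s Last Theorem (the special case x = y = z), and it remains open.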

One could– and should– go on: Wikenigma, via @Recomendo6.

* Richard Feynman

###

As we wonder, we might spare a thought for a man who embodied curiosity, Marvin Minsky; he died on this date in 2016. A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab). Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics. He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert). His other inventions include mechanical hands and the “Muse” synthesizer.

source

Written by (Roughly) Daily

January 24, 2023 at 1:00 am

“Poetry might be defined as the clear expression of mixed feelings”*…

Can artificial intelligence have those feelings? Scientist and poet Keith Holyoak explores:

… Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?…

A provocative consideration: “Can AI Write Authentic Poetry?,” from @mitpress.

Apposite: a fascinating Twitter thread on “why GPT-3’s proficiency at producing fluent, correct-seeming prose is an exciting opportunity for improving how we teach writing, how students learn to write, and how this can also benefit profs who assign writing, but don’t necessarily teach it.”

* W. H. Auden

###

As we ruminate on rhymes, we might send thoughtful birthday greetings to Michael Gazzaniga; he was born on this date in 1939. A leading researcher in cognitive neuroscience (the study of the neural basis of mind), his work has focused on how the brain enables humans to perform those advanced mental functions that are generally associated with what we call “the mind.” Gazzaniga has made significant contributions to the emerging understanding of how the brain facilitates such higher cognitive functions as remembering, speaking, interpreting, and making judgments.

source

Written by (Roughly) Daily

December 12, 2022 at 1:00 am

“Prediction is very difficult, especially if it’s about the future”*…

… but maybe not as hard as it once was. While multi-agent artificial intelligence was first used in the sixties, advances in technology have made it an extremely sophisticated modeling– and prediction– tool. As Derek Beres explains, it can be a powerfully accurate prediction engine… and it can potentially also be an equally powerful tool for manipulation…

The debate over free will is ancient, yet data don’t lie — and we have been giving tech companies access to our deepest secrets… We like to believe we’re not predictable, but that’s simply not true…

Multi-agent artificial intelligence (MAAI) is predictive modeling at its most advanced. It has been used for years to create digital societies that mimic real ones with stunningly accurate results. In an age of big data, there exists more information about our habits — political, social, fiscal — than ever before. As we feed them information on a daily basis, their ability to predict the future is getting better.

[And] given the current political climate around the planet… MAAI will most certainly be put to insidious means. With in-depth knowledge comes plenty of opportunities for exploitation and manipulation, no deepfake required. The intelligence might be artificial, but the target audience most certainly is not…
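To make the idea of a “digital society” a bit more concrete, here is a minimal, purely illustrative sketch of agent-based simulation, the core mechanic behind MAAI-style modeling. Everything here (agent count, neighborhood size, update rule) is invented for the example; real systems model far richer behavior against real data.

```python
# A toy "digital society": each agent holds an opinion in [0, 1] and
# nudges it toward the average opinion of a fixed set of neighbors.
# Illustrative only -- all parameters below are invented for the sketch.

import random

N_AGENTS = 100      # size of the toy society
N_NEIGHBORS = 5     # how many agents each agent "listens" to
N_STEPS = 50        # simulation horizon
INFLUENCE = 0.1     # how strongly neighbors pull an agent's opinion


def simulate(seed=0):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(N_AGENTS)]
    neighbors = [rng.sample(range(N_AGENTS), N_NEIGHBORS) for _ in range(N_AGENTS)]
    mean_trajectory = []
    for _ in range(N_STEPS):
        # Synchronous update: the comprehension reads the old opinions,
        # then the new list replaces them all at once.
        opinions = [
            op + INFLUENCE * (sum(opinions[j] for j in neighbors[i]) / N_NEIGHBORS - op)
            for i, op in enumerate(opinions)
        ]
        mean_trajectory.append(sum(opinions) / N_AGENTS)
    return mean_trajectory


if __name__ == "__main__":
    trajectory = simulate(seed=42)
    print(f"mean opinion after {N_STEPS} steps: {trajectory[-1]:.3f}")
```

Run across many seeds, the aggregate trajectory, not any individual agent, is the “prediction”; the same property that makes such models useful for forecasting also makes them attractive for testing persuasion strategies at scale.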

Move over deepfakes; multi-agent artificial intelligence is poised to manipulate your mind: “Can AI simulations predict the future?,” from @derekberes at @bigthink.


* Niels Bohr

###

As we analyze augury, we might note that today is National Computer Security Day. It was inaugurated by the Association for Computing Machinery (ACM) in 1988, shortly after an attack on ARPANET (the forerunner of the internet as we know it) that damaged several of the connected machines. Meant to call attention to the constant need for vigilance about security, it’s a great day to change all of one’s passwords.

source

Written by (Roughly) Daily

November 30, 2022 at 1:00 am

“The intelligence of the universe is social”*…

From the series Neural Zoo by Sofia Crespo

Recently, (Roughly) Daily looked at AI and our (that’s to say, humans’) possible relationships to it. In a consideration of James Bridle’s new book, Ways of Being, Doug Bierend widens the iris, considering our relationship not only to intelligences we might create but also to those with which we already co-habit…

It’s lonely at the top, but it doesn’t have to be. We humans tend to see ourselves as the anointed objects of evolution, our intelligence representing the leading edge of unlikely order cultivated amid an entropic universe. While there is no way to determine any purpose or intention behind the processes that produced us, let alone where they will or should lead, that hasn’t stopped some from making assertions. 

For example, consider the school of thought called longtermism, explored by Phil Torres in this essay for Aeon. Longtermism — a worldview held, as Torres notes, by some highly influential people including Elon Musk, Peter Thiel, tech entrepreneur Jaan Tallinn, and Jason Matheny, President Biden’s deputy assistant for technology and national security — essentially sees the prime directive of Homo sapiens as one of maximizing the “potential” of our species. That potential — often defined along such utilitarian lines as maximizing the population, distribution, longevity, and comfort that future humans could achieve over the coming millennia — is what longtermers say should drive the decisions we make today. Its most extreme version represents a kind of interstellar manifest destiny, human exceptionalism on the vastest possible scale. The stars are mere substrate for the extension and preservation of our species’ putatively unique gifts. Some fondly imagine our distant descendants cast throughout the universe in womb-like symbiosis with machines, ensconced in virtual environments enjoying perpetual states of bliss —The Matrix as utopia. 

Longtermist philosophy also overlaps with the “transhumanist” line of thought, articulated by figures such as philosopher Nick Bostrom, who describes human nature as incomplete, “a half-baked beginning that we can learn to remold in desirable ways.” Here, humanity as currently or historically constituted isn’t an end so much as a means of realizing some far greater fate. Transhumanism espouses the possibility of slipping the surly bonds of our limited brains and bodies to become “more than human,” in a sense reminiscent of fictional android builder Eldon Tyrell in Blade Runner: “Commerce is our goal,” Tyrell boasts. “‘More human than human’ is our motto.” Rather than celebrating and deepening our role within the world that produced us, these outlooks seek to exaggerate and consummate a centuries-long process of separation historically enabled by the paired forces of technology and capital. 

But this is not the only possible conception of the more than human. In their excellent new book Ways of Being, James Bridle also invokes the “more than human,” not as an effort to exceed our own limitations through various forms of enhancement but as a mega-category that collects within it essentially everything, from microbes and plants to water and stone, even machines. It is a grouping so vast and diverse as to be indefinable, which is part of Bridle’s point: The category disappears, and the interactions within it are what matters. More-than-human, in this usage, dismisses human exceptionalism in favor of recognizing the ecological nature of our existence, the co-construction of our lives, futures, and minds with the world itself. 

From this point of view, human intelligence is just one form of a more universal phenomenon, an emergent “flowering” found all throughout the evolutionary tree. It is among the tangled bramble of all life that our intelligence becomes intelligible, a gestalt rather than a particular trait. As Bridle writes, “intelligence is not something which exists, but something one does. It is active, interpersonal and generative, and it manifests when we think and act.” In Bridle’s telling, mind and meaning alike exist by way of relationship with everything else in the world, living or not. Accepting this, it makes little sense to elevate human agency and priorities above all others. If our minds are exceptional, it is still only in terms of their relationship to everything else that acts within the world. That is, our minds, like our bodies, aren’t just ours; they are contingent on everything else, which would suggest that the path forward should involve moving with the wider world rather than attempting to escape or surpass it.

This way of thinking borrows heavily from Indigenous concepts and cosmologies. It decenters human perspective and priorities, instead setting them within an infinite concatenation of agents engaged in the collective project of existence. No one viewpoint is more favored than another, not even of the biological over the mineral or mechanical. It is an invitation to engage with the “more-than-human” world not as though it consisted of objects but rather fellow subjects. This would cut against the impulse to enclose and conquer nature, which has been reified by our very study of it….

Technology often presupposes human domination, but it could instead reflect our ecological dependence: “Entangled Intelligence,” from @DougBierend in @_reallifemag (via @inevernu and @sentiers). Eminently worth reading in full.

* Marcus Aurelius, The Meditations

###

As we welcome fellow travelers, we might recall that this date in 1752 was the final day of use of the Julian calendar in Great Britain, Ireland, and the British colonies, including those on the East Coast of America. Eleven days were skipped to sync with the Gregorian calendar, which was designed to realign the calendar with the equinoxes; hence the following day was September 14. (Most of Europe had shifted, by Papal decree, to the Gregorian calendar in the 16th century; Russia and China made the move in the 20th century.)

source

Written by (Roughly) Daily

September 2, 2022 at 1:00 am

“It was orderly, like the universe. It had logic. It was dependable. Using it allowed a kind of moral uplift, as one’s own chaos was also brought under control.”*…

(Roughly) Daily has looked before at the history of the filing cabinet, rooted in the work of Craig Robertson (@craig2robertson). He has deepened his research and published a new book, The Filing Cabinet: A Vertical History of Information. An Xiao Mina offers an appreciation– and a consideration of one of the central questions it raises: can emergent knowledge coexist with an internet that privileges the kind of “certainty” implicit in the filing paradigm that was born with the filing cabinet and still informs our “knowledge systems” today…

… The 20th century saw an emergent information paradigm shaped by corporate capitalism, which emphasized maximizing profit and minimizing the time workers spent on tasks. Offices once kept their information in books—think Ebenezer Scrooge with his quill pen, updating his thick ledger on Christmas. The filing cabinet changed all that, encouraging what Robertson calls “granular certainty,” or “the drive to break more and more of life and its everyday routines into discrete, observable, and manageable parts.” This represented an important conceptualization: Information became a practical unit of knowledge that could be standardized, classified, and effortlessly stored and retrieved.

Take medical records, which require multiple layers of organization to support routine hospital business. “At the Bryn Mawr Hospital,” Robertson writes, “six different card files provided access to patient information: an alphabetical file of admission cards for discharged patients, an alphabetical file for the accident ward, a file to record all operations, a disease file, a diagnostic file, and a doctors’ file that recorded the number of patients each physician referred to the hospital.” The underlying logic of this system was that the storage of medical records didn’t just keep them safe; it made sure that those records could be accessed easily.

Robertson’s deep focus on the filing cabinet grounds the book in history and not historical analogy. He touches very little on Big Data and indexing and instead dives into the materiality of the filing cabinet and the principles of information management that guided its evolution. But students of technology and information studies will immediately see this history shaping our world today…

[And] if the filing cabinet, as a tool of business and capital, guides how we access digital information today, its legacy of certainty overshadows the messiness intrinsic to acquiring knowledge—the sort that requires reflection, contextualization, and good-faith debate. Ask the internet difficult questions with complex answers—questions of philosophy, political science, aesthetics, perception—and you’ll get responses using the same neat little index cards with summaries of findings. What makes for an ethical way of life? What is the best English-language translation of the poetry of Borges? What are the long-term effects of social inequalities, and how do we resolve them? Is it Yanny or Laurel?

Information collection and distribution today tends to follow the rigidity of cabinet logic to its natural extreme, but that bias leaves unattended more complex puzzles. The human condition inherently demands a degree of comfort with uncertainty and ambiguity, as we carefully balance incomplete and conflicting data points, competing value systems, and intricate frameworks to arrive at some form of knowing. In that sense, the filing cabinet, despite its deep roots in our contemporary information architecture, is just one step in our epistemological journey, not its end…
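The card-file logic Robertson describes maps almost directly onto the index structures behind modern databases. A minimal sketch in code (the records and field names are hypothetical), echoing Bryn Mawr’s one-card-file-per-question design:

```python
# "Card file" logic in code: the same records reached through several
# purpose-built indexes, one per question staff need answered -- much
# like Bryn Mawr's six card files. All records here are hypothetical.

from collections import defaultdict

records = [
    {"patient": "A. Smith", "doctor": "Dr. Jones", "disease": "influenza"},
    {"patient": "B. Lee",   "doctor": "Dr. Jones", "disease": "fracture"},
    {"patient": "C. Diaz",  "doctor": "Dr. Patel", "disease": "influenza"},
]

# Build one index per access path, as the hospital kept one card file
# per way of looking patients up.
by_doctor = defaultdict(list)
by_disease = defaultdict(list)
for rec in records:
    by_doctor[rec["doctor"]].append(rec)
    by_disease[rec["disease"]].append(rec)

# Retrieval is then a single lookup rather than a scan of the ledger.
print([r["patient"] for r in by_doctor["Dr. Jones"]])   # ['A. Smith', 'B. Lee']
print([r["patient"] for r in by_disease["influenza"]])  # ['A. Smith', 'C. Diaz']
```

Each index answers exactly one kind of question instantly, and nothing else; that trade-off is the “granular certainty” the book describes.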

A captivating new history helps us see a humble appliance’s sweeping influence on modern life: “The Logic of the Filing Cabinet Is Everywhere.”

* Jeanette Winterson, Why Be Happy When You Could Be Normal?

###

As we store and retrieve, we might recall that it was on this date in 1955 that the term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop it proposed, held at Dartmouth a year later, in July and August 1956, is generally recognized as the founding event of the new field.

Dartmouth Conference attendees: Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky)

source
