(Roughly) Daily

Posts Tagged ‘Marvin Minsky’

“I would rather have questions that can’t be answered than answers that can’t be questioned”*…

… or, as Confucius would have it, “real knowledge is to know the extent of one’s ignorance.” Happily, Wikenigma is here to help…

Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.

Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [949 so far]

That’s to say, a compendium of so-called ‘Known Unknowns’…

Consider, for example…

How do marine turtles accurately migrate thousands of kilometers to nest?

Can Beal’s conjecture be proved? (stated below)

Can one solve the “envelope paradox”? (also stated below)

Do “naked singularities” exist?

What is the etymology of the word “plot” (which appears only in English)?

What were the purposes of “Perforated Batons,” man-made prehistoric artifacts formed from deer antlers, dating back 12,000-24,000 years and found widely across Europe?

What are the function, importance, and evolutionary history of human “inner speech”?
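(For the curious, the two mathematical puzzles above can be stated precisely. Beal’s conjecture: if A^x + B^y = C^z, where A, B, C, x, y, and z are positive integers with x, y, and z all greater than 2, then A, B, and C share a common prime factor. The envelope paradox: handed two envelopes, one holding twice the sum of the other, you open one and find an amount x; reasoning that the other holds 2x or x/2 with equal probability makes switching “worth” (1/2)(2x) + (1/2)(x/2) = 1.25x, yet the same argument would recommend switching before either envelope was opened, and switching back again after that.)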

One could– and should– go on: Wikenigma, via @Recomendo6.

* Richard Feynman

###

As we wonder, we might spare a thought for a man who embodied curiosity, Marvin Minsky; he died on this date in 2016.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.

source

Written by (Roughly) Daily

January 24, 2023 at 1:00 am

“It was orderly, like the universe. It had logic. It was dependable. Using it allowed a kind of moral uplift, as one’s own chaos was also brought under control.”*…

(Roughly) Daily has looked before at the history of the filing cabinet, rooted in the work of Craig Robertson (@craig2robertson). He has deepened his research and published a new book, The Filing Cabinet: A Vertical History of Information. An Xiao Mina offers an appreciation– and a consideration of one of the central questions it raises: can emergent knowledge coexist with an internet that privileges the kind of “certainty” that’s implicit in the filing paradigm that was born with the filing cabinet and that informs our “knowledge systems” today…

… The 20th century saw an emergent information paradigm shaped by corporate capitalism, which emphasized maximizing profit and minimizing the time workers spent on tasks. Offices once kept their information in books—think Ebenezer Scrooge with his quill pen, updating his thick ledger on Christmas. The filing cabinet changed all that, encouraging what Robertson calls “granular certainty,” or “the drive to break more and more of life and its everyday routines into discrete, observable, and manageable parts.” This represented an important conceptualization: Information became a practical unit of knowledge that could be standardized, classified, and effortlessly stored and retrieved.

Take medical records, which require multiple layers of organization to support routine hospital business. “At the Bryn Mawr Hospital,” Robertson writes, “six different card files provided access to patient information: an alphabetical file of admission cards for discharged patients, an alphabetical file for the accident ward, a file to record all operations, a disease file, a diagnostic file, and a doctors’ file that recorded the number of patients each physician referred to the hospital.” The underlying logic of this system was that the storage of medical records didn’t just keep them safe; it made sure that those records could be accessed easily.

Robertson’s deep focus on the filing cabinet grounds the book in history and not historical analogy. He touches very little on Big Data and indexing and instead dives into the materiality of the filing cabinet and the principles of information management that guided its evolution. But students of technology and information studies will immediately see this history shaping our world today…

[And] if the filing cabinet, as a tool of business and capital, guides how we access digital information today, its legacy of certainty overshadows the messiness intrinsic to acquiring knowledge—the sort that requires reflection, contextualization, and good-faith debate. Ask the internet difficult questions with complex answers—questions of philosophy, political science, aesthetics, perception—and you’ll get responses using the same neat little index cards with summaries of findings. What makes for an ethical way of life? What is the best English-language translation of the poetry of Borges? What are the long-term effects of social inequalities, and how do we resolve them? Is it Yanny or Laurel?

Information collection and distribution today tends to follow the rigidity of cabinet logic to its natural extreme, but that bias leaves unattended more complex puzzles. The human condition inherently demands a degree of comfort with uncertainty and ambiguity, as we carefully balance incomplete and conflicting data points, competing value systems, and intricate frameworks to arrive at some form of knowing. In that sense, the filing cabinet, despite its deep roots in our contemporary information architecture, is just one step in our epistemological journey, not its end…

A captivating new history helps us see a humble appliance’s sweeping influence on modern life: “The Logic of the Filing Cabinet Is Everywhere.”

* Jeanette Winterson, Why Be Happy When You Could Be Normal?

###

As we store and retrieve, we might recall that it was on this date in 1955 that the term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place at Dartmouth a year later, in July and August 1956, is generally recognized as the official birth of the new field.

Dartmouth Conference attendees: Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky)

source

“The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust.”*…

Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity — a place where extrapolation breaks down and new models must be applied — and the world will pass beyond our understanding.

Vernor Vinge, True Names and Other Dangers

The once-vibrant transhumanist movement doesn’t capture as much attention as it used to; but as George Dvorsky explains, its ideas are far from dead. Indeed, they helped seed the Futurist movements that are so prominent today (and here and here)…

[On the heels of 9/11] transhumanism made a lot of sense to me, as it seemed to represent the logical next step in our evolution, albeit an evolution guided by humans and not Darwinian selection. As a cultural and intellectual movement, transhumanism seeks to improve the human condition by developing, promoting, and disseminating technologies that significantly augment our cognitive, physical, and psychological capabilities. When I first stumbled upon the movement, the technological enablers of transhumanism were starting to come into focus: genomics, cybernetics, artificial intelligence, and nanotechnology. These tools carried the potential to radically transform our species, leading to humans with augmented intelligence and memory, unlimited lifespans, and entirely new physical and cognitive capabilities. And as a nascent Buddhist, it meant a lot to me that transhumanism held the potential to alleviate a considerable amount of suffering through the elimination of disease, infirmity, mental disorders, and the ravages of aging.

The idea that humans would transition to a posthuman state seemed both inevitable and desirable, but, having an apparently functional brain, I immediately recognized the potential for tremendous harm.

The term “transhumanism” popped into existence during the 20th century, but the idea has been around for a lot longer than that.

The quest for immortality has always been a part of our history, and it probably always will be. The Mesopotamian Epic of Gilgamesh is the earliest written example, while the Fountain of Youth—the literal Fountain of Youth—was the obsession of Spanish explorer Juan Ponce de León.

Notions that humans could somehow be modified or enhanced appeared during the European Enlightenment of the 18th century, with French philosopher Denis Diderot arguing that humans might someday redesign themselves into a multitude of types “whose future and final organic structure it’s impossible to predict,” as he wrote in D’Alembert’s Dream.

The Russian cosmists of the late 19th and early 20th centuries foreshadowed modern transhumanism, as they ruminated on space travel, physical rejuvenation, immortality, and the possibility of bringing the dead back to life, the latter being a portent of cryonics—a staple of modern transhumanist thinking. From the 1920s through to the 1950s, thinkers such as British biologist J. B. S. Haldane, Irish scientist J. D. Bernal, and British biologist Julian Huxley (who popularized the term “transhumanism” in a 1957 essay) were openly advocating for such things as artificial wombs, human clones, cybernetic implants, biological enhancements, and space exploration.

It wasn’t until the 1990s, however, that a cohesive transhumanist movement emerged, a development largely brought about by—you guessed it—the internet…

[There follows a brisk and helpful history of transhumanist thought, then an account of the recent past, and present…]

Some of the transhumanist groups that emerged in the 1990s and 2000s still exist or evolved into new forms, and while a strong pro-transhumanist subculture remains, the larger public seems detached and largely uninterested. But that’s not to say that these groups, or the transhumanist movement in general, didn’t have an impact…

“I think the movements had mainly an impact as intellectual salons where blue-sky discussions made people find important issues they later dug into professionally,” said Sandberg. He pointed to Oxford University philosopher and transhumanist Nick Bostrom, who “discovered the importance of existential risk for thinking about the long-term future,” which resulted in an entirely new research direction. The Center for the Study of Existential Risk at the University of Cambridge and the Future of Humanity Institute at Oxford are the direct results of Bostrom’s work. Sandberg also cited artificial intelligence theorist Eliezer Yudkowsky, who “refined thinking about AI that led to the AI safety community forming,” and also the transhumanist “cryptoanarchists” who “did the groundwork for the cryptocurrency world,” he added. Indeed, Vitalik Buterin, a co-founder of Ethereum, subscribes to transhumanist thinking, and his father, Dmitry, used to attend our meetings at the Toronto Transhumanist Association…

Intellectual history: “What Ever Happened to the Transhumanists?,” from @dvorsky.

See also: “The Heaven of the Transhumanists” from @GenofMod (source of the image above).

* Donna Haraway

###

As we muse on mortality, we might send carefully-calculated birthday greetings to Marvin Minsky; he was born on this date in 1927.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.

source

“Humans as we know them are just one morphological waypoint on the long road of evolution”*…

 

Imagine a world where the human race is no longer the dominant species.

Extinct through war or spectacular accident. By devastating pandemic, super-natural disaster, or cosmic cataclysm.

Passed through the Singularity to become unrecognisably posthuman, and left the natural order forever behind.

Infected by a virus, hijacked by a parasite or otherwise co-opted to become ex-human – a “bio zombie” – moved sideways to a new position as ecological actor.

Gently absorbed into – or completely overshadowed by the unfathomable actions of – a superior civilisation comprising benevolent – or unacknowledging – emissaries from the stars (or extra-dimensions).

Dethroned by the return of ancient species, the reawakening of the slumbering Old Ones… Out-competed by the arrival of an invasive species from another world making the Earth just one habitat in a galactic ecology.

It could be far into the future or The Day After Tomorrow.

Robots may rule the world… not so much enslaving as letting us retire to a life of Fully Automated Luxury Gay Space Communism; life in The Culture as Iain M. Banks foresaw it could be.

What is the world like then? After us…

Imagine a world where the human race is no longer the dominant species: “What is the Post-Human World?”

* Annalee Newitz in “When Will Humanity Finally Die Out?”

###

As we stretch our frames, we might spare a thought for Marvin Minsky; he died on this date in 2016.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.

source

 

Written by (Roughly) Daily

January 24, 2018 at 1:01 am
