(Roughly) Daily

Posts Tagged ‘Singularity’

“The best way to predict the future is to invent it”*…

[Image: a vintage futuristic car driving down a tree-lined road, a man and a woman smiling inside]

Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive purposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI’s indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

… in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

– source

In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

[Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

[Image above: source]

Alan Kay

###

As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

“Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them”*…


… So humans won’t play a significant role in the spreading of intelligence across the cosmos. But that’s OK. Don’t think of humans as the crown of creation. Instead view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe towards higher complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago.

This is more than just another industrial revolution. This is something new that transcends humankind and even biology. It is a privilege to witness its beginnings, and contribute something to it…

Jürgen Schmidhuber– of whom it’s been said, “When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’”– shares the reasoning behind his almost breathless anticipation of intelligence-to-come: “Falling Walls: The Past, Present and Future of Artificial Intelligence.”

Then, for a different perspective on (essentially) the same assumption about the future, read Slavoj Žižek’s “Blade Runner 2049: A View of Post-Human Capitalism.”

* Terry Pratchett, The Long Earth

###

As we welcome our computer overlords, we might recall that it was on this date in 1930 that Henry W. Jeffries invented the Rotolactor.  Housed in the Lactorium of the Walker Gordon Laboratory Company, Inc., at Plainsboro, N.J., it was a 50-stall revolving platform that enabled the milking of 1,680 cows in seven hours by rotating them into position with the milking machines.  A spiffy version of the Rotolactor, displayed at the 1939 New York World’s Fair in the Borden building as part of the “Dairy World of Tomorrow,” was one of the most popular attractions in the Fair’s Food Zone.

source


Written by (Roughly) Daily

November 13, 2017 at 1:01 am

Special Summer Cheesecake Edition…

From Flavorwire, “Vintage Photos of Rock Stars In Their Bathing Suits.”

(Special Seasonal Bonus: from Sylvia Plath and Anne Sexton to Ernest Hemingway and Scott Fitzgerald, “Take a Dip: Literary Greats In Their Bathing Suits.”)

As we reach for the Coppertone, we might wish a soulful Happy Birthday to musician Isaac Hayes; he was born on this date in 1942.  An early stalwart at legendary Stax Records (e.g., Hayes co-wrote and played on the Sam and Dave hits “Soul Man” and “Hold On, I’m Comin’”), Hayes began to come into his own after the untimely demise of Stax’s headliner, Otis Redding.  First with his album Hot Buttered Soul, then with the score– including most famously the theme– for Shaft, Hayes became a star, and a pillar of the more engaged Black music scene of the 70s.  Hayes remained a pop culture force (e.g., as the voice of Chef on South Park) until his death in 2008.  (Note:  some sources give Hayes’ birth date as August 20; but county records in Covington, TN, his birthplace, suggest that it was the 6th.)

source

Your correspondent is headed for his ancestral seat, and for the annual parole check-in and head-lice inspection that does double duty as a family reunion.  Connectivity in that remote location being the challenged proposition that it is, these missives are likely to be in abeyance for the duration.  Regular service should resume on or about August 16.  

Meantime, lest readers be bored, a little something to ponder:

Depending who you ask, there’s a 20 to 50 percent chance that you’re living in a computer simulation. Not like The Matrix, exactly – the virtual people in that movie had real bodies, albeit suspended in weird, pod-like things and plugged into a supercomputer. Imagine instead a super-advanced version of The Sims, running on a machine with more processing power than all the minds on Earth. Intelligent design? Not necessarily. The Creator in this scenario could be a future fourth-grader working on a science project.

Oxford University philosopher Nick Bostrom argues that we may very well all be Sims. This possibility rests on three developments: (1) the aforementioned megacomputer. (2) The survival and evolution of the human race to a “posthuman” stage. (3) A decision by these posthumans to research their own evolutionary history, or simply amuse themselves, by creating us – virtual simulacra of their ancestors, with independent consciousnesses…

Read the full story– complete with a consideration of the more-immediate (and less-existentially-challenging) implications of “virtualization”– and watch the accompanying videos at Big Think… and channel your inner Philip K. Dick…

Y’all be good…

Leading horses to water…

… and making them drink:

from Spiked Math.

On a more serious note… many are skeptical of “the Singularity”– the hypothetical point at which technological progress will have accelerated so much that the future becomes fundamentally unpredictable and qualitatively different from what’s gone before (click here for a transcript of the talk by Vernor Vinge that launched the concept, and here for a peek at what’s become of Vernor’s initial thinking).  But even those with doubts (among whom your correspondent numbers) acknowledge that technology is re-weaving the very fabric of life.  Readers interested in a better understanding of what’s afoot and where it might lead will appreciate Kevin Kelly’s What Technology Wants (and the continuing discussion on Kevin’s site).

As we re-set our multiplication tables, we might recall that it was on this date in 1664 that natural philosopher, architect and pioneer of the Scientific Revolution Robert Hooke showed an advance copy of his book Micrographia— a chronicle of Hooke’s observations through various lenses– to members of the Royal Society.  The volume (which coined the word “cell” in a biological context) went on to become the first scientific best-seller, and inspired broad interest in the new science of microscopy.

source: Caltech

UPDATE: Reader JR notes that the image above is of an edition of Micrographia dated 1665.  Indeed, while (per the almanac entry above) the text was previewed to the Royal Society in 1664 (to wit the letter, verso), the book wasn’t published until September, 1665.  JR remarks as well that Micrographia is in English (while most scientific books of that time were still in Latin)– a fact that no doubt contributed to its best-seller status.