
“The best way to predict the future is to invent it”*…

A vintage futuristic car driving down a tree-lined road with a man and a woman smiling inside.

Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000-word) essay on the risks he fears from artificial intelligence: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive purposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically concentrated wealth, disruption in capital flows)? Will AI’s indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

… in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

– source

In any case, precaution and prudence in the pursuit of AI advances seem wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

[Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

[Image above: source]

* Alan Kay

###

As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

“We shape our tools and thereafter our tools shape us”*…

A late-19th-century illustration of 18th-century people, gobsmacked by the many tech changes that have made their world irrelevant

AI is on the march, with implications, TBD, for… well, for everything. Nayef Al-Rodhan ponders its potential impact on philosophy…

Around the world, Artificial Intelligence (AI) is seeping into every aspect of our daily life, transforming our computational power, and with it the manufacturing speed, military capabilities, and the fabric of our societies. Generative AI applications such as OpenAI’s ChatGPT, the fastest growing consumer application in history, have created both positive anticipation and alarm about the future potential of AI technology. Predictions range from doomsday scenarios describing the extinction of the human species to optimistic takes on how it could revolutionise the way we work, live and communicate. If used correctly, AI could catapult scientific, economic and technological advances into a new phase in human history. In doing so it has the potential to solve some of humanity’s biggest problems by preventing serious food and water scarcity, mitigating inequality and poverty, diagnosing life-threatening diseases, tackling climate change, preventing pandemics, designing new game-changing proteins, and much more.

AI technology is rapidly moving in the direction of Artificial General Intelligence (AGI), the ability to achieve human-level machine intelligence, with Google’s AI Chief recently predicting that there is a 50% chance that we’ll reach AGI within five years. This raises important questions about our human nature, our sentience, and our dignity needs. Can AI ever become truly sentient? If so, how will we know if that happens? Should sentient machines share similar rights and responsibilities as humans? The boardroom drama at OpenAI in late November 2023 also deepened the debate about the dangers of techno-capitalism: is it possible for corporate giants in the AI space to balance safety with the pursuit of revenues and profit? 

As AI advances at a breakneck speed, ethical considerations are becoming increasingly critical. Sentient AI implies that the technology has the capacity to evolve and be self-aware, in doing so feeling and experiencing the world just like a human would. According to the British mathematician Alan Turing, if the human cannot distinguish between whether it is conversing with an AI or another human, then the AI in question has passed the test. However, given AI’s sophisticated conversational skills and ability to give the impression of consciousness, the Turing Test is becoming too narrow and does not grasp all the nuances of what makes us sentient and, more broadly, human. To stay on the front foot of technological progress, we need to supplement the Turing Test with transdisciplinary frameworks for evaluating increasingly human-like AI. These frameworks should be based on approaches rooted in psychology, neuroscience, philosophy, the social sciences, political science and other relevant disciplines. 

We do not yet have a full understanding of what makes a thing sentient but transdisciplinary efforts by neuroscientists, computer scientists and philosophers are helping develop a deeper understanding of consciousness and sentience. So far, we have found that emotions are one of the important characteristics needed for sentience, as is agency or intrinsic motivation. A sentient AI would need to have the ability to create autonomous goals and an ability to pursue these goals. In human beings, this quality has evolved from our intrinsic survival instinct, while in AI it is still, for now, lacking. According to recent studies, a sense of time, narrative, and memory is also critical for determining sentience. A level of sentience comparable to humans would require autobiographical memory and a concept of the linear progression of time. In current AI systems, these capabilities are limited – but recent developments raise uncomfortable philosophical questions about whether sentient AI should share similar rights and responsibilities in the event that it becomes a reality. And if so, how does one hold the technology accountable for its actions? And how will we define – legally and ethically – sentient AI’s role in society? We currently treat AI technology and machines as property, so how will this change if they are granted their own rights? There is no clear-cut answer, but as I argued in ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’, we should attribute agency to machines whenever they appear to possess the same qualities that characterise humans. I also believe that machines ought to be treated as agents if they prove themselves to be emotional, amoral, and egoist.

These debates, however they unfold, will clearly have deep implications on the future of philosophy itself. In ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’ I make the case that it is a short step from AI’s present capabilities to its potential future use developing novel philosophical hypotheses and thought experiments. It is therefore not unthinkable that future AI systems could break new ground in the field of normative ethics, helping pinpoint moral principles that human philosophers have failed to grasp. However, we should be mindful that their conception of morality or beauty, for example, may have nothing in common with ours, or it may supersede our own capacities and reflections. This could limit the ability of sophisticated artificial agents to answer long-standing philosophical questions, however superior they may be to the most advanced human intellectual output. We should consider how these developments are likely to impact how we understand the world around us, both in terms of the subject matter and of the theorising entity involved. Artificial agents will no doubt be put under the microscope and will be studied alongside the human mind and human nature: not just to compare and contrast, but also to understand how these artificial entities relate to – and treat – one another, and humanity itself. There is also the question of how human philosophers will react if and when AI-steered machines become superior philosophical theorisers. Will flesh and blood philosophers be forced to compete cognitively with entities whose intellectual abilities vastly supersede our own? Will AI systems overtake our limited human reasoning and reflective capacities? If this happens, what does this mean for our own human agency, the control we have over our lives and the future of our societies?…

… Powerful AI technologies will progressively increase our capabilities, for good or ill. We therefore need to be clear-sighted about the AI governance frameworks urgently needed to futureproof the safe use of AI. The recent high drama at OpenAI, whose founding mission is “to ensure that artificial general intelligence benefits all of humanity”, gave us a glimpse of the main rift in the AI industry, pitting those focused on commercial growth against those uneasy with the potential ramifications of the unbridled development of AI. However well-motivated AI governance schemes might be, they are less robust than one would hope. At the same time, self-regulation by global tech companies is becoming increasingly difficult given the large sums at stake and the economic and political influence of these companies.

With this in mind, we must keep an open mind not just about the immediate man-made dangers of AI technologies but also their potential to redefine what it means to be human. They will shape how we understand and engage with the world, in doing so making us reevaluate our place in it. Our chances of survival as a species and the likelihood of our existence in a free, independent, peaceful, prosperous, creative and dignified world will depend on the future trajectory of AI. Our historical yearning for longing and belonging hangs in the balance. To protect citizens from potential harm and limit the risks, AI should be regulated just like any other technology. We must also apply transdisciplinary approaches to make sure that the use and governance of AI is always steered by human dignity needs for all, at all times and under all circumstances. AI’s trajectory is not predetermined, but the clock is ticking and humanity may have less time than it thinks to control its collective destiny… 

Eminently worth reading in full. Whether or not one agrees with the author’s specific conclusions, his larger point– that we need to be mindful and purposive about the deployment of AI– is surely well-taken: “Sentience, Safe AI and The Future of Philosophy: A Transdisciplinary Analysis,” from @SustainHistory in @oxpubphil.

See also: “Thinking About AI, Before AI Disappears” from Quentin Hardy’s new newsletter, Technohumanism. (source of image above).

* Father John Culkin, SJ, a Professor of Communication at Fordham University (and friend of Marshall McLuhan, to whom the quote is often incorrectly attributed)

###

As we think about thinking, we might recall that it was on this date in 1979 that Apple began work on the Lisa, which would become one of the first commercial computers with a graphical user interface.

Originally intended to sell for $2,000 and ship in 1981, the Lisa was delayed until 1983 and sold for $10,000. Though it used technology ahead of its time, its high cost, relative lack of software, and some hardware reliability issues ultimately sank it. Still, much of the technology introduced by the Lisa (itself rooted in the earlier work of Doug Engelbart [and here] and Xerox PARC) influenced the development of the Macintosh as well as other future computer and operating system designs: e.g., a bitmapped display, a window-based graphical user interface, icons, folders, a (single-button) mouse, (Ethernet) networking, file servers, print servers, and email.

The Lisa, with its development team (source)

“We shape our tools, and thereafter our tools shape us”*…


Bell Labs engineer Billy Klüver working on Oracle (1965), a collaboration with Robert Rauschenberg

Since it was first set up in 1907, Bell Labs has been at the forefront of scientific invention. During its peak, work undertaken at the labs led to the invention of the laser and the transistor, the birth of information theory and the creation of C, S and C++ programming languages, which form the basis of coding today. Bell Labs has been awarded a total of eight Nobel Prizes and every Silicon Valley start-up or global conglomerate has mined the mythology around its unique ability to foster new ideas for clues as to how one research laboratory could consistently turn out such an array of successful technologies…

During the 1960s and 1970s… Bell Labs turned the research centre into a playground for the likes of John Cage, Robert Rauschenberg and most of New York’s Lower East Side art scene…

The extraordinary tale of EAT (Experiments in Art and Technology), engineer Billy Klüver’s attempt to “make technology more human”– at “How AT&T shaped modern art.”

Then, by way of sampling the results, check out “9 Evenings,” a 1966 project exploring avant-garde theatre, dance and new technologies. Artists John Cage, Lucinda Childs, Öyvind Fahlström, Alex Hay, Deborah Hay, Steve Paxton, Yvonne Rainer, Robert Rauschenberg, David Tudor and Robert Whitman each worked with a Bell Labs engineer to create an original performance.

(The original AT&T is, of course, long gone; but Bell Labs lives on as part of Nokia– and EAT continues.)

* Marshall McLuhan– though, as noted above, the line actually belongs to Father John Culkin, SJ

###

As we celebrate collaboration, we might email elegantly and creatively designed birthday greetings to Douglas Carl Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

 source



Eek, a mouse!…

Via Blognator, one of a series of “Scared Dictators” in an ad campaign for ISHR (International Society for Human Rights).

As we remember to thank Doug Engelbart, we might recall that it was on this date in 1888 that Herman Hollerith, a statistician who founded the company that became IBM, installed the first “computing machine” (a mechanical tabulator using punched cards to rapidly sort and total statistics from millions of pieces of data) at the U.S. War Department.

Hollerith punched card