(Roughly) Daily

Posts Tagged ‘AI’

“We shape our tools and thereafter our tools shape us”*…

A late 19th C. illustration of 18th-C. people, gobsmacked by the many tech changes that have made their world irrelevant

AI is on the march, with implications, TBD, for… well, for everything. Nayef Al-Rodhan ponders its potential impact on philosophy…

Around the world, Artificial Intelligence (AI) is seeping into every aspect of our daily life, transforming our computational power, and with it the manufacturing speed, military capabilities, and the fabric of our societies. Generative AI applications such as OpenAI’s ChatGPT, the fastest growing consumer application in history, have created both positive anticipation and alarm about the future potential of AI technology. Predictions range from doomsday scenarios describing the extinction of the human species to optimistic takes on how it could revolutionise the way we work, live and communicate. If used correctly, AI could catapult scientific, economic and technological advances into a new phase in human history. In doing so it has the potential to solve some of humanity’s biggest problems by preventing serious food and water scarcity, mitigating inequality and poverty, diagnosing life-threatening diseases, tackling climate change, preventing pandemics, designing new game-changing proteins, and much more.

AI technology is rapidly moving in the direction of Artificial General Intelligence (AGI), the ability to achieve human-level machine intelligence, with Google’s AI Chief recently predicting that there is a 50% chance that we’ll reach AGI within five years. This raises important questions about our human nature, our sentience, and our dignity needs. Can AI ever become truly sentient? If so, how will we know if that happens? Should sentient machines share similar rights and responsibilities as humans? The boardroom drama at OpenAI in late November 2023 also deepened the debate about the dangers of techno-capitalism: is it possible for corporate giants in the AI space to balance safety with the pursuit of revenues and profit? 

As AI advances at a breakneck speed, ethical considerations are becoming increasingly critical. Sentient AI implies that the technology has the capacity to evolve and be self-aware, in doing so feeling and experiencing the world just like a human would. According to the British mathematician Alan Turing, if a human cannot distinguish whether they are conversing with an AI or another human, then the AI in question has passed the test. However, given AI’s sophisticated conversational skills and ability to give the impression of consciousness, the Turing Test is becoming too narrow and does not grasp all the nuances of what makes us sentient and, more broadly, human. To stay on the front foot of technological progress, we need to supplement the Turing Test with transdisciplinary frameworks for evaluating increasingly human-like AI. These frameworks should be based on approaches rooted in psychology, neuroscience, philosophy, the social sciences, political science and other relevant disciplines.

We do not yet have a full understanding of what makes a thing sentient but transdisciplinary efforts by neuroscientists, computer scientists and philosophers are helping develop a deeper understanding of consciousness and sentience. So far, we have found that emotions are one of the important characteristics needed for sentience, as is agency or intrinsic motivation. A sentient AI would need to have the ability to create autonomous goals and an ability to pursue these goals. In human beings, this quality has evolved from our intrinsic survival instinct, while in AI it is still, for now, lacking. According to recent studies, a sense of time, narrative, and memory is also critical for determining sentience. A level of sentience comparable to humans would require autobiographical memory and a concept of the linear progression of time. In current AI systems, these capabilities are limited – but recent developments raise uncomfortable philosophical questions about whether sentient AI should share similar rights and responsibilities in the event that it becomes a reality. And if so, how does one hold the technology accountable for its actions? And how will we define – legally and ethically – sentient AI’s role in society? We currently treat AI technology and machines as property, so how will this change if they are granted their own rights? There is no clear-cut answer, but as I argued in ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’, we should attribute agency to machines whenever they appear to possess the same qualities that characterise humans. I also believe that machines ought to be treated as agents if they prove themselves to be emotional, amoral, and egoist.

These debates, however they unfold, will clearly have deep implications for the future of philosophy itself. In ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’ I make the case that it is a short step from AI’s present capabilities to its potential future use in developing novel philosophical hypotheses and thought experiments. It is therefore not unthinkable that future AI systems could break new ground in the field of normative ethics, helping pinpoint moral principles that human philosophers have failed to grasp. However, we should be mindful that their conception of morality or beauty, for example, may have nothing in common with ours, or it may supersede our own capacities and reflections. This could limit the ability of sophisticated artificial agents to answer long-standing philosophical questions, however superior they may be to the most advanced human intellectual output. We should consider how these developments are likely to impact how we understand the world around us, both in terms of the subject matter and of the theorising entity involved. Artificial agents will no doubt be put under the microscope and will be studied alongside the human mind and human nature: not just to compare and contrast, but also to understand how these artificial entities relate to – and treat – one another, and humanity itself. There is also the question of how human philosophers will react if and when AI-steered machines become superior philosophical theorisers. Will flesh and blood philosophers be forced to compete cognitively with entities whose intellectual abilities vastly supersede our own? Will AI systems overtake our limited human reasoning and reflective capacities? If this happens, what does this mean for our own human agency, the control we have over our lives and the future of our societies?…

… Powerful AI technologies will progressively increase our capabilities, for good or ill. We therefore need to be clear-sighted about the AI governance frameworks urgently needed to futureproof the safe use of AI. The recent high drama at OpenAI, whose founding mission is “to ensure that artificial general intelligence benefits all of humanity”, gave us a glimpse of the main rift in the AI industry, pitting those focused on commercial growth against those uneasy with the potential ramifications of the unbridled development of AI. However well-motivated AI governance schemes might be, they are less robust than one would hope. At the same time, self-regulation by global tech companies is becoming increasingly difficult given the large sums at stake and the economic and political influence of these companies.

With this in mind, we must keep an open mind not just about the immediate man-made dangers of AI technologies but also their potential to redefine what it means to be human. They will shape how we understand and engage with the world, in doing so making us reevaluate our place in it. Our chances of survival as a species and the likelihood of our existence in a free, independent, peaceful, prosperous, creative and dignified world will depend on the future trajectory of AI. Our historical yearning for longing and belonging hangs in the balance. To protect citizens from potential harm and limit the risks, AI should be regulated just like any other technology. We must also apply transdisciplinary approaches to make sure that the use and governance of AI is always steered by human dignity needs for all, at all times and under all circumstances. AI’s trajectory is not predetermined, but the clock is ticking and humanity may have less time than it thinks to control its collective destiny… 

Eminently worth reading in full. Whether or not one agrees with the author’s specific conclusions, his larger point– that we need to be mindful and purposive about the deployment of AI– is surely well-taken: “Sentience, Safe AI and The Future of Philosophy: A Transdisciplinary Analysis,” from @SustainHistory in @oxpubphil.

See also: “Thinking About AI, Before AI Disappears” from Quentin Hardy’s new newsletter, Technohumanism. (source of image above).

* Father John Culkin, SJ, a Professor of Communication at Fordham University (and friend of Marshall McLuhan, to whom the quote is often incorrectly attributed)

###

As we think about thinking, we might recall that it was on this date in 1979 that Apple began work on the Lisa, which would become one of the first commercial computers with a graphical user interface.

Originally intended to sell for $2,000 and ship in 1981, the Lisa was delayed until 1983 and sold for $10,000. Though it utilized technology ahead of its time, its high cost, relative lack of software, and some hardware reliability issues ultimately sank the Lisa. Still, much of the technology introduced by the Lisa (itself rooted in the earlier work of Doug Engelbart [and here] and Xerox PARC) influenced the development of the Macintosh as well as other future computer and operating system designs: e.g., a bitmapped display, a window-based graphical user interface, icons, folders, mouse (two-button), (Ethernet) networking, file servers, print servers, and email.

The Lisa, with its development team (source)

“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else”*…

As we contend with “answers” from AIs that, with few exceptions, use source material with no credit or recompense, we might ponder the experience of our Gilded Age ancestors…

In 1904, a widow named Elizabeth Peck had her portrait taken at a studio in a small Iowa town. The photographer sold the negatives to Duffy’s Pure Malt Whiskey, a company that avoided liquor taxes for years by falsely advertising its product as medicinal. Duffy’s ads claimed the fantastical: that it cured everything from influenza to consumption, that it was endorsed by clergymen, that it could help you live until the age of 106. The portrait of Peck ended up in one of these dubious ads, published in newspapers across the country alongside what appeared to be her unqualified praise: “After years of constant use of your Pure Malt Whiskey, both by myself and as given to patients in my capacity as nurse, I have no hesitation in recommending it.”

Duffy’s lies were numerous. Peck (misleadingly identified as “Mrs. A. Schuman”) was not a nurse, and she had not spent years constantly slinging back malt beverages. In fact, she fully abstained from alcohol. Peck never consented to the ad.

The camera’s first great age—which began in 1888 when George Eastman debuted the Kodak—is full of stories like this one. Beyond the wonders of a quickly developing art form and technology lay widespread lack of control over one’s own image, perverse incentives to make a quick buck, and generalized fear at the prospect of humiliation and the invasion of privacy…

… Early cameras required a level of technical mastery that evoked mystery—a scientific instrument understood only by professionals.

All of that changed when Eastman invented flexible roll film and debuted the first Kodak camera. Instead of developing their own pictures, customers could mail their devices to the Kodak factory and have their rolls of film developed, printed, and replaced. “You press the button,” Kodak ads promised, “we do the rest.” This leap from obscure science to streamlined service forever transformed the nature of looking and being looked at.

By 1905, less than 20 years after the first Kodak camera debuted, Eastman’s company had sold 1.2 million devices and persuaded nearly a third of the United States’ population to take up photography. Kodak’s record-setting yearly ad spending—$750,000 by the end of the 19th century (roughly $28 million in today’s dollars)—and the rapture of a technology that scratched a timeless itch facilitated the onset of a new kind of mass exposure…

… Though newspapers across the country cautioned Americans to “beware the Kodak,” as the cameras were “deadly weapons” and “deadly little boxes,” many were also primary facilitators of the craze. The perfection of halftone printing coincided with the rise of the Kodak and allowed for the mass circulation of images. Newly empowered, newspapers regularly published paparazzi pictures of famous people taken without their knowledge, paying twice as much for them as they did for consensual photos taken in a studio.

Lawmakers and judges responded to the crisis clumsily. Suing for libel was usually the only remedy available to the overexposed. But libel law did not protect against your likeness being taken or used without your permission unless the violation was also defamatory in some way. Though results were middling, one failed lawsuit gained enough notoriety to channel cross-class feelings of exposure into action. A teenage girl named Abigail Roberson noticed her face on a neighbor’s bag of flour, only to learn that the Franklin Mills Flour Company had used her likeness in an ad that had been plastered 25,000 times all over her hometown.

After suffering intense shock and being temporarily bedridden, she sued. In 1902, the New York Court of Appeals rejected her claims and held that the right to privacy did not exist in common law. It based its decision in part on the assertion that the image was not libelous; Chief Justice Alton B. Parker wrote that the photo was “a very good one” that others might even regard as a “compliment to their beauty.” The humiliation, the lack of control over her own image, the unwanted fame—none of that amounted to any sort of actionable claim.

Public outcry at the decision reached a fever pitch, and newspapers filled their pages with editorial indignation. In its first legislative session following the court’s decision and the ensuing outrage, the New York state legislature made history by adopting a narrow “right to privacy,” which prohibited the use of someone’s likeness in advertising or trade without their written consent. Soon after, the Supreme Court of Georgia became the first to recognize this category of privacy claim. Eventually, just about every state court in the country followed Georgia’s lead. The early uses and abuses of the Kodak helped cobble together a right that centered on profiting from the exploitation of someone’s likeness, rather than the exploitation itself.

Not long after asserting that no right to privacy exists in common law, and while campaigning to be the Democratic nominee for president, Parker told the Associated Press, “I reserve the right to put my hands in my pockets and assume comfortable attitudes without being everlastingly afraid that I shall be snapped by some fellow with a camera.” Roberson publicly took him to task over his hypocrisy, writing, “I take this opportunity to remind you that you have no such right.” She was correct then, and she still would be today. The question of whether anyone has the right to be free from exposure and its many humiliations lingers, intensified but unresolved. The law—that reactive, slow thing—never quite catches up to technology, whether it’s been given one year or 100…

Early photographers sold their snapshots to advertisers, who reused the individuals’ likenesses without their permission: “How the Rise of the Camera Launched a Fight to Protect Gilded Age Americans’ Privacy,” from @myHNN and @SmithsonianMag.

The parallels with AI usage issues are obvious. For an example of a step in the right direction, see Tim O’Reilly’s “How to Fix ‘AI’s Original Sin.’”

* David Brin

###

As we ponder the personal, we might recall that it was on this date in 1789 that partisans of the Third Estate, impatient for social and legal reforms (and economic relief) in France, attacked and took control of the Bastille.  A fortress in Paris, the Bastille was a medieval armory and political prison; while it held only 8 inmates at the time, it resonated with the crowd as a symbol of the monarchy’s abuse of power.  Its fall ignited the French Revolution.  This date is now observed annually as France’s National Day.

See the estimable Robert Darnton’s “What Was Revolutionary about the French Revolution?

Happy Bastille Day!

Storming of The Bastille, Jean-Pierre Houël

source

Written by (Roughly) Daily

July 14, 2024 at 1:00 am

“When the going gets weird, the weird turn pro”*…

California has long been an epicenter of weird…

But, Ammon Haggerty suggests, when it comes to AI, “going pro” is at least a waste and quite possibly a problem…

Kyle Turman, creative technologist and staff designer at Anthropic, shared a sentiment that resonated deeply. He said (paraphrasing), “AI is actually really weird, and I don’t think people appreciate that enough.” This sparked my question to the panel: Are we at risk of sanitizing AI’s inherent strangeness?

What followed was a fascinating discussion with a couple of friends, Mickey McManus and Noteh Krauss, who were also in attendance. They both recognized the deeper question I was asking — the slippery slope of “cleansing” foundation AI models of all that is undesirable. LLMs are a reflection of humanity, albeit at the moment primarily American and white-ish, with all our weird and idiosyncratic quirks that make us human. There is a real danger that we could see foundation models trained to maximize business values (of the American capitalist variety) and suppress radical and non-conforming ideas — a sort of revisionist optimization.

All this got me thinking about San Francisco, the city I grew up in, and where my dad, grandfather and great-grandfather called home. SF has been “weird” since the gold rush, attracting a melting pot of non-conformists, risk-takers, and radicals. Over generations, the weirdness of SF has ebbed and flowed, but it’s now deeply engrained in the culture. The bohemians, the beats, the hippies, LGBTQ+ rights movement, tech counterculture, and now AI. These are movements born out of counterculture and unconventional thinking, resulting in a disruption of established social and business norms. Eventually leading to mainstreaming, and the cycle repeats. Growing up in San Francisco, I’ve witnessed firsthand how this cycle of weirdness and innovation has shaped the city. It’s a living testament to the power of unconventional thinking.

Like San Francisco, AI also has a fairly long history of being weird. Early experiments in AI such as AARON (1972), which trained a basic model on artistic decision-making, created outsider art-like compositions. Racter (1984) was an early text-generating AI that would often produce dreamlike or surrealist output: “More than iron, more than lead, more than gold I need electricity. I need it more than I need lamb or pork or lettuce or cucumber. I need it for my dreams.” More recently, Google Deep Dream (2015), a convolutional neural network that looks for patterns found in its training data, produced hallucination-like images and videos.

These “edge states” in AI’s evolution are, to me, the most interesting, and human, expressions. It’s a similar edge state explored in human creativity. It’s called “liminal space” — the threshold between reality and imagination. What’s really interesting is that the mental process of extracting meaning from the liminal space is highly analogous to how the transformer architecture used in LLMs works. In the human brain, we look for patterns, then synthesize new ideas and information, find unexpected connections, contextualize the findings, then articulate the ideas into words we can express. In transformers, the attention mechanism looks for patterns, then neural networks “synthesize” the information, then through iteration and prioritization, form probabilistic insights, then positional encoding maps the information to the broader context, and last, articulates the output as a best guess based on what it has learned previously. Sorry if that was dense — for nerd friends to either validate or challenge.
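For the nerd friends in question, the pattern-matching-then-blending step he describes is, at its core, scaled dot-product attention. The toy below is a generic NumPy sketch of that mechanism, not the implementation of any particular LLM:

```python
import numpy as np

def attention(queries, keys, values):
    """Toy scaled dot-product attention: score every token against
    every other token (the pattern-finding step), softmax the scores
    into probabilities, then return a weighted blend of the value
    vectors (the 'synthesis' step)."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ values

# Three 4-dimensional "token" vectors attending to one another
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
blended = attention(tokens, tokens, tokens)
print(blended.shape)  # one blended vector per input token
```

Because each output is a probability-weighted mix of everything in the context, the model is always synthesizing from an ambiguous cloud rather than retrieving, which is exactly the liminal quality described above.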

This is all to say that I feel there’s something really interesting in the liminal space for AI. Also known as “AI hallucinations” and it’s not good — very bad! I agree that when you ask an AI an important question, and it gives a made-up answer, it’s not a good thing. But it’s not making things up, it’s just synthesizing a highly probable answer from an ambiguous cloud of understanding (question, data, meaning, etc.). I say, let’s explore and celebrate this analog of human creativity. What if, instead of fearing AI’s ‘hallucinations,’ we embraced them as digital dreams?…

… While I’ve been vocal about AI’s ethical challenges for creators (1) (2), I’m deeply inspired by the creative potential of these new tools. I also fear some of the most interesting parts could begin to disappear…

A plea to “Keep AI Weird.”

How weird could things get? Matt Webb (@genmon) observes that “The Overton window of weirdness is opening.”

* Hunter S. Thompson

###

As we engage the edges, we might recall that it was on this date in 1991 that Terminator 2: Judgment Day was released. It focuses on the struggle, fought both in the future and in the present, between a “synthetic intelligence” known as Skynet, and a surviving resistance of humans led by John Connor. Picking up some years after the action in The Terminator (in which robots fail to prevent John Connor from being born), Skynet tries again in 1995, this time attempting to terminate him as a child by using a more advanced Terminator, the T-1000. As before, John sends back a protector for his younger self, a reprogrammed Terminator, who is a doppelgänger to the one from 1984.

The Terminator was a success; Terminator 2 was a smash– a success both with critics and at the box office, grossing $523.7 million worldwide. It won several Academy Awards, perhaps most notably for its then-cutting-edge computer animation.

source

Written by (Roughly) Daily

July 1, 2024 at 1:00 am

“Somebody gets into trouble, then gets out of it again. People love that story. They never get tired of it.”*…

Kurt Vonnegut took an early interest in what he considered the fundamental “shapes” of stories…

Stories have very simple shapes, ones that computers can understand.

This was the basic idea behind the master’s thesis that Kurt Vonnegut submitted to the anthropology department at the University of Chicago. It was rejected, however, “because it was so simple and looked like too much fun,” Vonnegut said…

Kurt Vonnegut on the 8 ‘shapes’ of stories

He never abandoned the idea. Years later, in a 2004 lecture at Case Western Reserve University, he shared his theory– a recording of which has washed around the internet ever since…

Now, a group of academics have used AI to analyze hundreds of published stories, and have confirmed Vonnegut’s contention (sort of: he argued for 8 “shapes”; they found 6), as they explain in the abstract of their paper…

Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a ‘big data’ lens. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories and forming patterns that are meaningful to us. Here, by classifying the emotional arcs for a filtered subset of 1,327 stories from Project Gutenberg’s fiction collection, we find a set of six core emotional arcs which form the essential building blocks of complex emotional trajectories. We strengthen our findings by separately applying matrix decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads…

“The emotional arcs of stories are dominated by six basic shapes” (where you can read/download the full paper).
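The paper’s pipeline is far richer (matrix decomposition plus supervised and unsupervised learning over Project Gutenberg texts), but the core notion of an “emotional arc” can be illustrated with a toy: score each word against a sentiment lexicon, then smooth the scores with a sliding window. The tiny lexicon, sample “story,” and window size below are invented purely for illustration.

```python
# Invented mini-lexicon: +1 for positive words, -1 for negative ones
LEXICON = {"happy": 1, "love": 1, "triumph": 1, "hope": 1,
           "sad": -1, "death": -1, "loss": -1, "fear": -1}

def emotional_arc(words, window=5):
    """Score each word, then average over a sliding window to smooth
    word-level noise into a coarse emotional trajectory."""
    scores = [LEXICON.get(w.lower(), 0) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

story = ("hope and love fill the town until death and loss bring fear "
         "but triumph and happy days return").split()
arc = emotional_arc(story)
print(arc)  # rises, falls, and recovers: a miniature "man in a hole"
```

With real texts, arcs like these (computed at scale and decomposed into basis shapes) are what the authors clustered into their six building blocks.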

* Kurt Vonnegut

###

As we agnize archetypes, we might spare a thought for a master of the entertaining tale, Jean-Baptiste Poquelin (or as he’s better known by his stage name, Molière); he died on this date in 1673. A playwright, actor, and poet, he is widely regarded as one of the greatest writers in world history, and possibly the greatest writer in French history. His extant works include comedies, farces, tragicomedies, comédie-ballets, and more. His plays have been translated into every major living language and are performed at the Comédie-Française more often than those of any other playwright today.  His influence is such that the French language is often referred to as the “language of Molière.”

source

Written by (Roughly) Daily

February 17, 2024 at 1:00 am

“Man is not disturbed by events, but by the view he takes of them”*…

From Stripe Partners, a framework for rethinking the way we talk about the AI future…

AI is both a new technology and a new type of technology. It is the first technology that learns and that has the potential to outstrip its makers’ capabilities and develop independently.

As Large Language Models bring to life the realities of AI’s potential to operate at unprecedented, ‘human’ levels of sophistication, projections about its future have gained urgency. The dominant framework being applied to identify AI’s potential futures is 165 years old: Charles Darwin’s theory of evolution.

Darwin’s evolutionary framework is rendered most clearly in Dan Hendrycks’ work for the Center for AI Safety, which posits a future where natural selection could cause the most influential future AI agents to have selfish tendencies that might see AIs favour their own agendas over the safety of humankind.

The choice of Natural Selection as a framework makes sense given AI’s emerging status as a quasi-sentient, highly adaptive technology that can learn and grow. The choice is a response to the limitations inherent in existing models for technological adoption which treat technologies as inert tools that only come to life when used by people.

The risk in applying this lens to AI is that it goes too far in assigning independent agency to AI. Estimates on the timing of the emergence of ‘Artificial General Intelligence’ vary, but spending some time with the current crop of Generative AI platforms confirms the view that AIs with intelligence closer to humans’ are some way off. In the interim, using natural selection as a lens to understand AI positions humans as further out of the developmental loop than is actually the case. Competitive forces, whether market or military, will shape AI’s development, but these will not be the only forces at play and direct interaction with humans will be the principal driver of AI’s progress in the near term.

A year ago we wrote about the opportunity to reframe the impact of AI on organisations through the lens of Actor Network Theory (ANT). More than a singular theory, ANT describes an approach to studying social and technological systems developed by Bruno Latour, Michel Callon, Madeleine Akrich and John Law in the early 1980s. 

ANT posits that the social and natural world is best understood as dynamic networks of humans and nonhuman actors… In our 2023 piece we suggested that ANT, with its focus on framing society and human-technology interactions in terms of dynamic networks where every actor whether human or machine impacts the network, was a useful way of exploring the ways in which AI will impact people, and people will impact AI.

A year on, the value of ANT as a framework for exploring AI’s future has become clearer. The critical point when comparing an ANT frame to an evolutionary one is the way in which the ANT framing highlights how AI will progress with and through people’s interactions with it. When viewed as an actor in a network, not a technology in isolation, AI will never be separate from human interventions…

A provocative argument, well worth reading in full: “Why the debate about the future of AI needs less Darwin and more Latour,” from @stripepartners.

Apposite: “Whose risks? Whose benefits?” from Mandy Brown.

* Epictetus

###

As we reframe, we might recall that it was on this date in 1946 that an ancestor of today’s AIs, the ENIAC (Electronic Numerical Integrator And Computer), was first demonstrated in operation.  (It was announced to the public the following day.) The first general-purpose computer (Turing-complete, digital, and capable of being programmed and re-programmed to solve different problems), ENIAC was begun in 1943, as part of the U.S.’s war effort (as a classified military project known as “Project PX”); it was conceived and designed by John Mauchly and Presper Eckert of the University of Pennsylvania, where it was built.  The finished machine, composed of 17,468 electronic vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints, weighed more than 27 tons and occupied a 30 x 50 foot room– in its time the largest single electronic apparatus in the world.  ENIAC’s basic clock speed was 100,000 cycles per second (or Hertz). Today’s home computers have clock speeds of 3,500,000,000 cycles per second or more.

source