Posts Tagged ‘AI’
“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else”*…
As we contend with “answers” from AIs that, with few exceptions, use source material with neither credit nor recompense, we might ponder the experience of our Gilded Age ancestors…
In 1904, a widow named Elizabeth Peck had her portrait taken at a studio in a small Iowa town. The photographer sold the negatives to Duffy’s Pure Malt Whiskey, a company that avoided liquor taxes for years by falsely advertising its product as medicinal. Duffy’s ads claimed the fantastical: that it cured everything from influenza to consumption, that it was endorsed by clergymen, that it could help you live until the age of 106. The portrait of Peck ended up in one of these dubious ads, published in newspapers across the country alongside what appeared to be her unqualified praise: “After years of constant use of your Pure Malt Whiskey, both by myself and as given to patients in my capacity as nurse, I have no hesitation in recommending it.”
Duffy’s lies were numerous. Peck (misleadingly identified as “Mrs. A. Schuman”) was not a nurse, and she had not spent years constantly slinging back malt beverages. In fact, she fully abstained from alcohol. Peck never consented to the ad.
The camera’s first great age—which began in 1888 when George Eastman debuted the Kodak—is full of stories like this one. Beyond the wonders of a quickly developing art form and technology lay widespread lack of control over one’s own image, perverse incentives to make a quick buck, and generalized fear at the prospect of humiliation and the invasion of privacy…
… Early cameras required a level of technical mastery that evoked mystery—a scientific instrument understood only by professionals.
All of that changed when Eastman invented flexible roll film and debuted the first Kodak camera. Instead of developing their own pictures, customers could mail their devices to the Kodak factory and have their rolls of film developed, printed, and replaced. “You press the button,” Kodak ads promised, “we do the rest.” This leap from obscure science to streamlined service forever transformed the nature of looking and being looked at.
By 1905, less than 20 years after the first Kodak camera debuted, Eastman’s company had sold 1.2 million devices and persuaded nearly a third of the United States’ population to take up photography. Kodak’s record-setting yearly ad spending—$750,000 by the end of the 19th century (roughly $28 million in today’s dollars)—and the rapture of a technology that scratched a timeless itch facilitated the onset of a new kind of mass exposure…
…
… Though newspapers across the country cautioned Americans to “beware the Kodak,” as the cameras were “deadly weapons” and “deadly little boxes,” many were also primary facilitators of the craze. The perfection of halftone printing coincided with the rise of the Kodak and allowed for the mass circulation of images. Newly empowered, newspapers regularly published paparazzi pictures of famous people taken without their knowledge, paying twice as much for them as they did for consensual photos taken in a studio.
Lawmakers and judges responded to the crisis clumsily. Suing for libel was usually the only remedy available to the overexposed. But libel law did not protect against your likeness being taken or used without your permission unless the violation was also defamatory in some way. Though results were middling, one failed lawsuit gained enough notoriety to channel cross-class feelings of exposure into action. A teenage girl named Abigail Roberson noticed her face on a neighbor’s bag of flour, only to learn that the Franklin Mills Flour Company had used her likeness in an ad that had been plastered 25,000 times all over her hometown.
After suffering intense shock and being temporarily bedridden, she sued. In 1902, the New York Court of Appeals rejected her claims and held that the right to privacy did not exist in common law. It based its decision in part on the assertion that the image was not libelous; Chief Judge Alton B. Parker wrote that the photo was “a very good one” that others might even regard as a “compliment to their beauty.” The humiliation, the lack of control over her own image, the unwanted fame—none of that amounted to any sort of actionable claim.
Public outcry at the decision reached a fever pitch, and newspapers filled their pages with editorial indignation. In its first legislative session following the court’s decision and the ensuing outrage, the New York state legislature made history by adopting a narrow “right to privacy,” which prohibited the use of someone’s likeness in advertising or trade without their written consent. Soon after, the Supreme Court of Georgia became the first to recognize this category of privacy claim. Eventually, just about every state court in the country followed Georgia’s lead. The early uses and abuses of the Kodak helped cobble together a right that centered on profiting from the exploitation of someone’s likeness, rather than the exploitation itself.
Not long after asserting that no right to privacy exists in common law, and while campaigning to be the Democratic nominee for president, Parker told the Associated Press, “I reserve the right to put my hands in my pockets and assume comfortable attitudes without being everlastingly afraid that I shall be snapped by some fellow with a camera.” Roberson publicly took him to task over his hypocrisy, writing, “I take this opportunity to remind you that you have no such right.” She was correct then, and she still would be today. The question of whether anyone has the right to be free from exposure and its many humiliations lingers, intensified but unresolved. The law—that reactive, slow thing—never quite catches up to technology, whether it’s been given one year or 100…
Early photographers sold their snapshots to advertisers, who reused the individuals’ likenesses without their permission: “How the Rise of the Camera Launched a Fight to Protect Gilded Age Americans’ Privacy,” from @myHNN and @SmithsonianMag.
The parallels with AI usage issues are obvious. For an example of a step in the right direction, see Tim O’Reilly’s “How to Fix ‘AI’s Original Sin.’”
* David Brin
###
As we ponder the personal, we might recall that it was on this date in 1789 that partisans of the Third Estate, impatient for social and legal reforms (and economic relief) in France, attacked and took control of the Bastille. A fortress in Paris, the Bastille was a medieval armory and political prison; while it held only seven inmates at the time, it resonated with the crowd as a symbol of the monarchy’s abuse of power. Its fall ignited the French Revolution. This date is now observed annually as France’s National Day.
See the estimable Robert Darnton’s “What Was Revolutionary about the French Revolution?”
Happy Bastille Day!

“When the going gets weird, the weird turn pro”*…
But, Ammon Haggerty suggests, when it comes to AI, “going pro” is at least a waste and quite possibly a problem…
Kyle Turman, creative technologist and staff designer at Anthropic, shared a sentiment that resonated deeply. He said (paraphrasing), “AI is actually really weird, and I don’t think people appreciate that enough.” This sparked my question to the panel: Are we at risk of sanitizing AI’s inherent strangeness?
What followed was a fascinating discussion with a couple of friends, Mickey McManus and Noteh Krauss, who were also in attendance. They both recognized the deeper question I was asking — the slippery slope of “cleansing” foundation AI models of all that is undesirable. LLMs are a reflection of humanity, albeit at the moment primarily American and white-ish, with all our weird and idiosyncratic quirks that make us human. There is a real danger that we could see foundation models trained to maximize business values (of the American capitalist variety) and suppress radical and non-conforming ideas — a sort of revisionist optimization.
All this got me thinking about San Francisco, the city I grew up in and the place my dad, grandfather, and great-grandfather called home. SF has been “weird” since the gold rush, attracting a melting pot of non-conformists, risk-takers, and radicals. Over generations, the weirdness of SF has ebbed and flowed, but it’s now deeply ingrained in the culture. The bohemians, the beats, the hippies, the LGBTQ+ rights movement, tech counterculture, and now AI: these are movements born out of counterculture and unconventional thinking, resulting in a disruption of established social and business norms. Eventually each is mainstreamed, and the cycle repeats. Growing up in San Francisco, I’ve witnessed firsthand how this cycle of weirdness and innovation has shaped the city. It’s a living testament to the power of unconventional thinking.
Like San Francisco, AI also has a fairly long history of being weird. Early experiments in AI, such as AARON (1972), which encoded a basic model of artistic decision-making, created outsider-art-like compositions. Racter (1984) was an early text-generating AI that would often produce dreamlike or surrealist output: “More than iron, more than lead, more than gold I need electricity. I need it more than I need lamb or pork or lettuce or cucumber. I need it for my dreams.” More recently, Google Deep Dream (2015), a convolutional neural network that looks for patterns found in its training data, produced hallucination-like images and videos.
These “edge states” in AI’s evolution are, to me, the most interesting, and most human, expressions. A similar edge state is explored in human creativity; it’s called “liminal space” — the threshold between reality and imagination. What’s really interesting is that the mental process of extracting meaning from the liminal space is highly analogous to how the transformer architecture used in LLMs works. In the human brain, we look for patterns, then synthesize new ideas and information, find unexpected connections, contextualize the findings, then articulate the ideas into words we can express. In transformers, the attention mechanism looks for patterns, then neural networks “synthesize” the information, then through iteration and prioritization form probabilistic insights, then positional encoding maps the information to the broader context, and last, the model articulates the output as a best guess based on what it has seen previously. Sorry if that was dense — for nerd friends to either validate or challenge.
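For the “nerd friends” invited above to validate or challenge the analogy, here is a minimal, illustrative sketch of the attention step being gestured at, written in Python with NumPy. The function name, toy sizes, and random inputs are assumptions for demonstration only, not anything from the original post.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query scores every key ("looks for patterns"), then blends the
    # corresponding values into a new, context-aware representation.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted "synthesis" of the values

# Toy self-attention over 4 tokens with 8-dimensional embeddings (sizes are arbitrary)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each token re-expressed in terms of all the others

In a real transformer this runs across many heads and layers, and positional encodings are added to the inputs before attention rather than mapping context afterward, which is one place the paraphrase above simplifies.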
This is all to say that I feel there’s something really interesting in the liminal space for AI. Also known as “AI hallucinations” and it’s not good — very bad! I agree that when you ask an AI an important question, and it gives a made-up answer, it’s not a good thing. But it’s not making things up, it’s just synthesizing a highly probable answer from an ambiguous cloud of understanding (question, data, meaning, etc.). I say, let’s explore and celebrate this analog of human creativity. What if, instead of fearing AI’s ‘hallucinations,’ we embraced them as digital dreams?…
… While I’ve been vocal about AI’s ethical challenges for creators (1) (2), I’m deeply inspired by the creative potential of these new tools. I also fear some of the most interesting parts could begin to disappear…
A plea to “Keep AI Weird.”
How weird could things get? Matt Webb (@genmon) observes that “The Overton window of weirdness is opening.”
* Hunter S. Thompson
###
As we engage the edges, we might recall that it was on this date in 1991 that Terminator 2: Judgment Day was released. It focuses on the struggle, fought both in the future and in the present, between a “synthetic intelligence” known as Skynet and a surviving resistance of humans led by John Connor. Picking up some years after the action in The Terminator (in which Skynet’s machines fail to prevent John Connor from being born), the sequel has them try again in 1995, this time attempting to terminate him as a child by sending a more advanced Terminator, the T-1000. As before, John sends back a protector for his younger self: a reprogrammed Terminator, a doppelgänger of the one from 1984.
The Terminator was a success; Terminator 2 was a smash– a success both with critics and at the box office, grossing $523.7 million worldwide. It won several Academy Awards, perhaps most notably for its then-cutting-edge computer animation.
“Man is not disturbed by events, but by the view he takes of them”*…
From Stripe Partners, a framework for rethinking the way we talk about the AI future…
AI is both a new technology and a new type of technology. It is the first technology that learns and that has the potential to outstrip its makers’ capabilities and develop independently.
As Large Language Models bring to life the realities of AI’s potential to operate at unprecedented, ‘human’ levels of sophistication, projections about its future have gained urgency. The dominant framework being applied to identify AI’s potential futures is 165 years old: Charles Darwin’s theory of evolution.
Darwin’s evolutionary framework is rendered most clearly in Dan Hendrycks’ work for the Center for AI Safety, which posits a future where natural selection could cause the most influential future AI agents to have selfish tendencies that might see AIs favour their own agendas over the safety of humankind.
The choice of natural selection as a framework makes sense given AI’s emerging status as a quasi-sentient, highly adaptive technology that can learn and grow. The choice is a response to the limitations inherent in existing models for technological adoption, which treat technologies as inert tools that only come to life when used by people.
The risk in applying this lens to AI is that it goes too far in assigning independent agency to AI. Estimates of the timing of the emergence of ‘Artificial General Intelligence’ vary, but spending some time with the current crop of Generative AI platforms confirms the view that AIs with intelligences closer to humans’ are some way off. In the interim, using natural selection as a lens to understand AI positions humans as further out of the developmental loop than is actually the case. Competitive forces, whether market or military, will shape AI’s development, but these will not be the only forces at play, and direct interaction with humans will be the principal driver of AI’s progress in the near term.
A year ago we wrote about the opportunity to reframe the impact of AI on organisations through the lens of Actor Network Theory (ANT). More than a singular theory, ANT describes an approach to studying social and technological systems developed by Bruno Latour, Michel Callon, Madeleine Akrich and John Law in the early 1980s.
ANT posits that the social and natural world is best understood as dynamic networks of human and nonhuman actors… In our 2023 piece we suggested that ANT, with its focus on framing society and human-technology interactions in terms of dynamic networks in which every actor, whether human or machine, affects the network, was a useful way of exploring the ways in which AI will impact people, and people will impact AI.
A year on, the value of ANT as a framework for exploring AI’s future has become clearer. The critical point when comparing an ANT frame to an evolutionary one is the way in which the ANT framing highlights how AI will progress with and through people’s interactions with it. When viewed as an actor in a network, not a technology in isolation, AI will never be separate from human interventions…
A provocative argument, well worth reading in full: “Why the debate about the future of AI needs less Darwin and more Latour,” from @stripepartners.
Apposite: “Whose risks? Whose benefits?” from Mandy Brown.
* Epictetus
###
As we reframe, we might recall that it was on this date in 1946 that an ancestor of today’s AIs, the ENIAC (Electronic Numerical Integrator And Computer), was first demonstrated in operation. (It was announced to the public the following day.) The first general-purpose computer (Turing-complete, digital, and capable of being programmed and re-programmed to solve different problems), ENIAC was begun in 1943, as part of the U.S.’s war effort (as a classified military project known as “Project PX”); it was conceived and designed by John Mauchly and Presper Eckert of the University of Pennsylvania, where it was built. The finished machine, composed of 17,468 electronic vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and around 5 million hand-soldered joints, weighed more than 27 tons and occupied a 30 x 50 foot room– in its time the largest single electronic apparatus in the world. ENIAC’s basic clock speed was 100,000 cycles per second (100 kilohertz). Today’s home computers have clock speeds of 3,500,000,000 cycles per second (3.5 gigahertz) or more, roughly 35,000 times as fast.