(Roughly) Daily

Posts Tagged ‘Quentin Hardy’

“They are strange times, times of beginnings and endings. Dangerous and powerful. And we feel it even if we don’t know what it is.”*…

A digital art representation of a humanoid figure with a fragmented head, featuring abstract geometric shapes and colors emerging from one side against a dark background.

Back in 2019, (R)D considered a piece from the remarkable Freeman Dyson on what the biotech revolution could mean (itself further to thoughts in an earlier piece of his). Those thoughts popped back into my mind when I read Quentin Hardy‘s recent recounting of his lunch with a friend…

We’re at an outdoor table in Mission Bay, the wet tech hotspot of San Francisco, home to Biopharma, Biotech, and Techbio research labs, known and emerging, plus big hospitals and research outfits.

Across from my salad and his sandwich, Ashlee sweeps his arm in a big arc across Long Bridge Street, towards all the residential and mixed-use buildings.

“There’s dozens of tiny labs up there,” he says, “somebody’s got a mouse, they’re doing something – growing organs, playing with neurons, injecting them with a virus to change their genetics. All kinds of weird shit. It’s wild, man.”

“All kinds of weird shit” and Ashlee have been intimates for years. They have been good to each other. We met around the time computers started moving from the closet to the cloud, and we both wrote about dirt-cheap satellites, and how cell phone guts were ending up in strange places, changing our world with cheap drones and voluminous data. Back when everything really started changing.

I went to Google to write about how those really big data sets and massive amounts of cloud computation were enabling Artificial Intelligence. Ashlee wrote the first biography of Elon Musk, which took him into Musk’s interests in non-governmental rocketry and neural implants. For many years Bloomberg paid him to do a show called Hello World, where he covered Doomsday preppers, fake meat, Nigerian hackers, and all kinds of strange things. All the creative journalists were jealous of him, not least because they couldn’t touch his talent for finding and admiring this abundance of exotic invention.

He now has his own show, Core Memory, which has unsurpassed reporting on all sorts of cutting-edge robotics, life reprogrammers, amateur space stations, body hackers, and new materials manufacturers. Highly recommended.

Back to his current interests. “I know this guy who’s harvesting rat neurons,” he says, “he talks about using them to power data centers.”…

… We start talking about biohacking and self-medication, all the people shooting up peptides, and the places around town where the kids are mixing their AI with their biohacking, and all the quasi-legal stuff people are doing, growing new human and animal parts.

On one level, they’re just following the “lots of data, lots of compute” model, only into the infinitely more complex wet world. Just as enough people posted tagged photos online to enable Fei-Fei Li to make and exploit ImageNet, a major milestone in the creation of image-recognition AI, so these new hackers hope to tag, track, remix and scan enough biological data to remake biological understanding. And capability.

I’ve got my kale and he’s got his meat, partly liberated from the bread. Some of the fun in hanging out with Ashlee is the way we can free-associate over years of covering this kind of stuff, knowing that some things blow up and some things don’t work out, good ideas go down while the mad and the lucky are proclaimed geniuses. In other words, we get to bullshit about the weird shit.

“Maybe it’s going to turn into some kind of ghost gun thing, where people take drugs and perform genetic procedures that are legal on their own and turn it into some kind of illegal treatment,” I say. “You’ll go on a luxury cruise into international waters to get your genetic makeup altered, or blend two different animals into a third. Like ‘The Floating Offshore Platform of Dr. Moreau,’” after H.G. Wells’s story about a mad scientist making human-animal hybrids…

… But we’re also talking about Biology, that most intimate and complex of sciences, being colonized by a trend we’ve seen elsewhere in tech for years: Prices fall far enough to change the rules of access, newcomers hack the system in defiance of the old standards and business models. Oceans of new data turn up, changing the entire process of understanding.

We’ve seen it happen in enough places to know the pattern. Open source Linux, cheap and attractive enough for all kinds of people to improve it for free, wiped out the old computer server industry. WiFi was open source too, so the price was right and interest surged.

The tech doesn’t have to be open source, or free, either. Economic cycles play a part. When the Internet bubble burst, space companies like Iridium and Globalstar, Rotary Rocket and Kistler, crashed. Lots of cheap talent and parts hit the market, which enabled Elon to do SpaceX. I once did a story about how entertainment in Africa changed after the price of satellite dishes fell below $200, and the tech moved from expatriate compounds to local bars.

The cost of biological experimentation is on a far crazier decline, giving Ashlee a lot of material. Twenty-three years after the first human genome was sequenced at a cost of $2.7 billion, a “complete genetic engineering home lab,” with a refurbished DNA sequencing machine and a “Bioengineering 101 Course” can be yours for $2500. Neurotechnology tools are available for sale or rent, so you can try neural implants at home. China is spinning up dozens of brain-computer interface startups.
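The two price points quoted above are not equivalent products (a full genome sequence versus a hobbyist kit), but treating them as rough endpoints gives a sense of the scale of the decline — a quick back-of-the-envelope sketch:

```python
# Rough fold-decline implied by the two price points quoted above.
# These are not comparable products, so this is only an
# order-of-magnitude illustration, not a real cost curve.
first_genome_cost = 2.7e9   # first human genome, ~$2.7 billion
home_lab_cost = 2.5e3       # "complete genetic engineering home lab," ~$2,500

fold_decline = first_genome_cost / home_lab_cost
print(f"{fold_decline:,.0f}x")  # 1,080,000x
```

Even as a crude comparison, a roughly million-fold drop in the price of entry is the kind of shift that, elsewhere in tech, has reliably changed who gets to participate.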

“They’ve got a city in China that’s just doing brain technology stuff,” says Ashlee. When I lived in Asia 30 years ago, cities in China were famous for specializing in things like athletic socks and bras, wiping out the competition worldwide by cranking out more stuff more cheaply than anyone else. Now that the abundance of data and the cheapness of compute have kicked off the AI revolution, they have turned to brain tech. I pick at my kale.

Of course, just because the prices are a fraction of what they used to be, and these new hackers are descending on San Francisco, Cambridge, Miami, and who knows where else, it doesn’t mean breakthroughs are at hand. Biology is a lot more complex than electronics – a lot. Perhaps even more important, the new AI technology that people hope will enable all kinds of bio breakthroughs requires enormous amounts of data. The data set has to be huge, it has to be gathered in a single place the AI can access, and perhaps most critically of all, it has to be standardized to the highest quality…

… The biohackers face a big quality issue too. The Nobel Prize-winning protein-folding work made use of some of the cleanest data possible, and Waymo came out of Alphabet’s cutting-edge sensor- and data-analysis labs. The guy in some converted Apartment 3G doing the thing with the iguana liver, the woman in the co-working space with the rat pituitary, they’re probably not going to bring the same magic.

“Yeah, but they’re not the only ones doing this,” says Ashlee. “I just had on Jennifer Doudna.” Doudna, who won a Nobel prize for her work on gene editing, now runs the Innovative Genomics Institute, a place rigorously pursuing this knowledge following traditional standards. She makes a couple of excellent points in Ashlee’s interview. She thinks a lot of the gunslinger biohackers will find biology much more complex and problematic than they think. At the same time, she expects a lot of the regulatory hurdles to new ways of doing things will become familiar over time, lowering the steps and costs of bringing out new drugs and treatments.

These lower costs will make more things possible, and attract more innovation. This will drive crazy a health and insurance industry built around high costs. If history is any guide, the incumbents won’t surrender their high-cost businesses without a fight. That may be one reason why Doudna thinks that big genetic alterations will show up in agriculture first…

… Which, apparently, at this point isn’t weird enough. “I’ve got to catch up with this university researcher I met at a party,” he says, pushing away his plate. “She’s working on transplanting the personality of one animal, like a dolphin, to another, like a cow.”

“You mean, like you get a cow that wants to body surf in the wake of a tourist boat?”

He nods. “I know. Weird shit, right?”

I barely know what to do with this one, but I’m still in my “Dr. Moreau” zone.

“So maybe someday, instead of capital punishment, a convicted murderer will receive the personality of a Labrador Retriever?”

“Could be,” he says. “Who knows what people do with this stuff.”

“Has there ever been a time when people were creating a future this weird, when people were going to live in ways they couldn’t even recognize?”

“I dunno,” he says. “Explorer times?”

“I mean yeah, maybe for the Aztecs at first, when they saw the Conquistadors on their horses and thought it was some new kind of hybrid god/animal. But pretty soon the Spanish guys got off their horses and just started messing up the city and killing people. Pretty much like the Aztecs had been doing for a couple of generations. Business as usual.”

“I feel you,” he says. “Hey, I got to go. There’s some guys in Argentina who have this satellite and space tug that went off course. It’s like 50 million kilometers from Earth, but they think they can bring it back.” Weird stuff…

Biohacking in SF, where Dr. Moreau’s a piker, & humanoid robots are a happy delusion. Eminently worth reading in full: “Kale Salad with Ash.”

For more on the dizzying pace of experimentation (this time, in AI), pair with “Agent Claw.”

[Image above: source]

* “At such times the universe gets a little closer to us. They are strange times, times of beginnings and endings. Dangerous and powerful. And we feel it even if we don’t know what it is. These times are not necessarily good, and not necessarily bad. In fact, what they are depends on what we are.” – Terry Pratchett, I Shall Wear Midnight

###

As we FAFO, we might recall that it was on this date in 1897 that the Indiana State House of Representatives passed Bill No. 246, which gave pi the exact value of 3.2– a nice, round– and wrong– number.

Hoosier Dr. Edwin J. Goodwin, M.D., a mathematics enthusiast, satisfied himself that he’d succeeded in “squaring the circle.”  Hoping to share with his home state the fame that would surely be forthcoming, Dr. Goodwin drafted legislation that would make Indiana the first to declare the value of pi as law, and convinced Representative Taylor I. Record, a farmer and lumber merchant, to introduce it.  As an incentive, Dr. Goodwin, who planned to copyright his “discovery,” offered in the bill to make it available to Indiana textbooks at no cost.

It seems likely that few members of the House understood the bill (many said so during the debate), crammed as it was with 19th century mathematical jargon.  Indeed, as Peter Beckmann wrote in his History of Pi, the bill contained “hair-raising statements which not only contradict elementary geometry, but also appear to contradict each other.”  (Full text of the bill here.)  Still, it sailed through the House.
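Setting aside the bill’s internal contradictions, even its headline value of 3.2 fails the most basic check — a quick comparison against the true ratio (this illustration is ours, not drawn from the bill’s text):

```python
import math

# The value of pi the 1897 Indiana bill would have legislated.
LEGISLATED_PI = 3.2

# How far the legislated value sits from the actual constant.
error = LEGISLATED_PI - math.pi
relative_error = error / math.pi

print(f"absolute error: {error:.4f}")           # 0.0584
print(f"relative error: {relative_error:.2%}")  # 1.86%
```

A nearly two-percent error in pi: every circle in Indiana would have been credited with about two percent more circumference (and area) than it actually has — elementary geometry the House was in no position to vote away.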

As it happened, Professor Clarence Abiathar Waldo, the head of the Purdue University Mathematics Department and author of a book titled Manual of Descriptive Geometry, was in the Statehouse lobbying for the University’s budget appropriation as the final debate and vote were underway. He was astonished to find the General Assembly debating mathematical legislation.  Naturally, he listened in… and he was horrified.

On February 11 the legislation was introduced in the Senate and referred to the Committee on Temperance, which reported the bill favorably the next day, and sent it to the Senate floor for debate.

But Professor Waldo had “coached” (as he later put it) a number of key Senators on the bill, so this time its reception was different.  According to an Indianapolis News report of February 13,

…the bill was brought up and made fun of. The Senators made bad puns about it, ridiculed it and laughed over it. The fun lasted half an hour. Senator Hubbell said that it was not meet for the Senate, which was costing the State $250 a day, to waste its time in such frivolity. He said that in reading the leading newspapers of Chicago and the East, he found that the Indiana State Legislature had laid itself open to ridicule by the action already taken on the bill. He thought consideration of such a proposition was not dignified or worthy of the Senate. He moved the indefinite postponement of the bill, and the motion carried.

As one watches state governments around the U.S. enacting similarly nonsensical, unscientific legislation (e.g., here… perhaps legislators went to school on this), one might be forgiven for wondering “Where’s Waldo?”

Black and white photo of Professor Clarence A. Waldo, a mathematics instructor at Purdue University, standing in front of a classroom in 1899.

 source

Written by (Roughly) Daily

February 5, 2026 at 1:00 am

“Sitting with a deck of cards in your hand all day is an obsession”*…

Cover of the book 'Cards as Weapons' featuring Ricky Jay, a magician depicted with cards and various elements related to magic and performance.

Long-time readers will know of your correspondent’s affection and respect for the late, great Ricky Jay (see, e.g., here and here). The estimable Quentin Hardy (and here) recalls the happy experience of seeing Jay perform his remarkable stage show, “Ricky Jay and His 52 Assistants” (“who were, of course, an ordinary deck of cards, serving under his complete domination”) and the realization that it triggered…

… Ricky Jay – it seems absurd to reduce that mellifluous name to its given or surname components, and parodically stuffy to write “Mr. Jay” – was primarily a close-up magician, moving cards and coins in all sorts of magical ways. He was also renowned as a card thrower, onstage penetrating a watermelon at 10 paces, and tossing a card as far as 190 feet, or at 90 miles per hour. He was an actor, an engaging writer, a bibliophile, and a deeply learned historian of freaks, cons, conjurers, armless calligraphers, and other nonstandard humans.

What I saw of his secret, I believe, illuminated his talent and his other motivating interests.

I don’t remember details of his lacerating onstage game, though it was excellent entertainment for us marks and his audience. After a couple of minutes we were swept off so he could move on to another amazement. But not before I saw his thumb.

Ricky Jay’s thumb was a seemingly unassuming digit, at rest beneath the clever patter, the astonishing cards dancing across the table, and the beautiful fingers controlling the cards’ movements, then recalling them to their correct place in the deck. By chance, I noticed this thumb running alongside the deck in between deals, and even though the magician was talking to me I sensed a sensitive side communication between the thumb and the man.

It was akin to watching wild nature, when an animal’s excellence is at one with its environment. No, it was better: It was wild nature guided by a fierce human intelligence. I saw him talking to the audience, but he was in a side conversation with a thumb that knew by feel where every card was. This knowledge was the outcome of focused years, which had extended the man’s talent beyond his body into the deck of cards. The state would be aspirational, except a dolt such as I (and, sorry, likely you too, dear reader) can hardly imagine this state of perfection.

What was his trick? The trick was training so deep that his thumb knew where every card was, and could say where it needed to go next. While he was talking, he was checking in with it, making sure everything was in its place as he readied himself for the next seamless adventure.

This may sound comical, but I was awed by a moment of man and thumb, and all that had gone into it. I saw hours of work, a pursuit beyond training with the goal of melding oneself with an object, until the practitioner and the object are completely attuned.

There are other examples of this fusion of identity with an action or object. Jimi Hendrix, as he moved from a band guitarist to a phenomenon, practiced leaning against a wall, so he wouldn’t hurt himself when he fell asleep. W.C. Fields, like Ricky Jay the product of a childhood he’d as soon forget, practiced the juggling that made him a vaudeville star over a boardinghouse bed, so he could likewise collapse, then get up and resume. The classical violinist Chee-yun Kim, who fell asleep playing the piano at age 3 (her mother, terrified, moved her to the fiddle), once forgot to eat during a three-day recording session. There are many more examples…

… I think about Ricky Jay’s thumb, and practicing so hard that part of you enters a physical object, when I think of his breakthrough book, “Learned Pigs and Fireproof Women.” A compendium of extraordinary performers in history, it memorializes the high divers, master memorizers, poison drinkers and fire resisters, and the woman who wrote, simultaneously, four different words with her hands and feet. Some performers are mountebanks, but the most moving passages are about people whose circumstances compelled them to will themselves into something superhuman.

It may be necessity, as in the case of the armless pianist who played with his toes. Or it may be pure chance, as befell Leon Rauch, a hallucinating teenage runaway who met a conjuror, and threw himself into close-up magic and contortionism. He gained worldwide fame as LaRoche, when he trapped himself in a small sphere and shifted his center of gravity sufficiently to roll up a 50-foot vertical spiral, an adult curled up like a fetus, dazzling the world as he climbed far above them. Far from his origins, too. Call it “dedication” or “obsession.” The goal is transformation, and an escape into a new self.

Ricky Jay, and many other extraordinary entertainers, encourage their reputation as hard-edged guys in a hard world. Indeed, both he and his mentor, the magician Dai Vernon, sought out card cheats, con men, fakes, and other scoundrels. They were searching for the mechanics of their treachery on the unwitting. These villains were presumably not interested in transformation, but simply grift.

Over the years, I have given copies of “Learned Pigs” to more than one acquaintance going through a difficult time, and to this day I keep a few spare copies on hand. It works like magic. Until today I have not disclosed Ricky Jay’s secret: There is no secret, there is only the desire and will for transformation that is inside us all…

A great magician, and an estimable escape artist of a different kind: “Ricky Jay’s Thumb.”

* Ricky Jay

###

As we shuffle and cut, we might recall that it was on this date in 1975 that the master tapes of the ELO album, Face the Music, went to the pressing plant. It featured “Strange Magic” and was their first to earn a platinum record.

Cover of the Electric Light Orchestra album 'Face the Music' featuring an eerie scene with an electric chair and dramatic lighting.

source

Written by (Roughly) Daily

August 30, 2025 at 1:00 am

“We shape our tools and thereafter our tools shape us”*…

A late 19th C. illustration of 18th-C. people, gobsmacked by the many tech changes that have made their world irrelevant

AI is on the march, with implications, TBD, for… well, for everything. Nayef Al-Rodhan ponders its potential impact on philosophy…

Around the world, Artificial Intelligence (AI) is seeping into every aspect of our daily life, transforming our computational power, and with it the manufacturing speed, military capabilities, and the fabric of our societies. Generative AI applications such as OpenAI’s ChatGPT, the fastest growing consumer application in history, have created both positive anticipation and alarm about the future potential of AI technology. Predictions range from doomsday scenarios describing the extinction of the human species to optimistic takes on how it could revolutionise the way we work, live and communicate. If used correctly, AI could catapult scientific, economic and technological advances into a new phase in human history. In doing so it has the potential to solve some of humanity’s biggest problems by preventing serious food and water scarcity, mitigating inequality and poverty, diagnosing life-threatening diseases, tackling climate change, preventing pandemics, designing new game-changing proteins, and much more.

AI technology is rapidly moving in the direction of Artificial General Intelligence (AGI), the ability to achieve human-level machine intelligence, with Google’s AI Chief recently predicting that there is a 50% chance that we’ll reach AGI within five years. This raises important questions about our human nature, our sentience, and our dignity needs. Can AI ever become truly sentient? If so, how will we know if that happens? Should sentient machines share similar rights and responsibilities as humans? The boardroom drama at OpenAI in late November 2023 also deepened the debate about the dangers of techno-capitalism: is it possible for corporate giants in the AI space to balance safety with the pursuit of revenues and profit? 

As AI advances at a breakneck speed, ethical considerations are becoming increasingly critical. Sentient AI implies that the technology has the capacity to evolve and be self-aware, in doing so feeling and experiencing the world just like a human would. According to the British mathematician Alan Turing, if the human cannot distinguish between whether it is conversing with an AI or another human, then the AI in question has passed the test. However, given AI’s sophisticated conversational skills and ability to give the impression of consciousness, the Turing Test is becoming too narrow and does not grasp all the nuances of what makes us sentient and, more broadly, human. To stay on the front foot of technological progress, we need to supplement the Turing Test with transdisciplinary frameworks for evaluating increasingly human-like AI. These frameworks should be based on approaches rooted in psychology, neuroscience, philosophy, the social sciences, political science and other relevant disciplines. 

We do not yet have a full understanding of what makes a thing sentient but transdisciplinary efforts by neuroscientists, computer scientists and philosophers are helping develop a deeper understanding of consciousness and sentience. So far, we have found that emotions are one of the important characteristics needed for sentience, as is agency or intrinsic motivation. A sentient AI would need to have the ability to create autonomous goals and an ability to pursue these goals. In human beings, this quality has evolved from our intrinsic survival instinct, while in AI it is still, for now, lacking. According to recent studies, a sense of time, narrative, and memory is also critical for determining sentience. A level of sentience comparable to humans would require autobiographical memory and a concept of the linear progression of time. In current AI systems, these capabilities are limited – but recent developments raise uncomfortable philosophical questions about whether sentient AI should share similar rights and responsibilities in the event that it becomes a reality. And if so, how does one hold the technology accountable for its actions? And how will we define – legally and ethically – sentient AI’s role in society? We currently treat AI technology and machines as property, so how will this change if they are granted their own rights? There is no clear-cut answer, but as I argued in ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’, we should attribute agency to machines whenever they appear to possess the same qualities that characterise humans. I also believe that machines ought to be treated as agents if they prove themselves to be emotional, amoral, and egoist.

These debates, however they unfold, will clearly have deep implications on the future of philosophy itself. In ‘Transdisciplinarity, neuro-techno-philosophy, and the future of philosophy’ I make the case that it is a short step from AI’s present capabilities to its potential future use developing novel philosophical hypotheses and thought experiments. It is therefore not unthinkable that future AI systems could break new ground in the field of normative ethics, helping pinpoint moral principles that human philosophers have failed to grasp. However, we should be mindful that their conception of morality or beauty, for example, may have nothing in common with ours, or it may supersede our own capacities and reflections. This could limit the ability of sophisticated artificial agents to answer long-standing philosophical questions, however superior they may be to the most advanced human intellectual output. We should consider how these developments are likely to impact how we understand the world around us, both in terms of the subject matter and of the theorising entity involved. Artificial agents will no doubt be put under the microscope and will be studied alongside the human mind and human nature: not just to compare and contrast, but also to understand how these artificial entities relate to – and treat – one another, and humanity itself. There is also the question of how human philosophers will react if and when AI-steered machines become superior philosophical theorisers. Will flesh and blood philosophers be forced to compete cognitively with entities whose intellectual abilities vastly supersede our own? Will AI systems overtake our limited human reasoning and reflective capacities? If this happens, what does this mean for our own human agency, the control we have over our lives and the future of our societies?…

… Powerful AI technologies will progressively increase our capabilities, for good or ill. We therefore need to be clear-sighted about the AI governance frameworks urgently needed to futureproof the safe use of AI. The recent high drama at OpenAI, whose founding mission is “to ensure that artificial general intelligence benefits all of humanity”, gave us a glimpse of the main rift in the AI industry, pitting those focused on commercial growth against those uneasy with the potential ramifications of the unbridled development of AI. However well-motivated AI governance schemes might be, they are less robust than one would hope. At the same time, self-regulation by global tech companies is becoming increasingly difficult given the large sums at stake and the economic and political influence of these companies.

With this in mind, we must keep an open mind not just about the immediate man-made dangers of AI technologies but also their potential to redefine what it means to be human. They will shape how we understand and engage with the world, in doing so making us reevaluate our place in it. Our chances of survival as a species and the likelihood of our existence in a free, independent, peaceful, prosperous, creative and dignified world will depend on the future trajectory of AI. Our historical yearning for longing and belonging hangs in the balance. To protect citizens from potential harm and limit the risks, AI should be regulated just like any other technology. We must also apply transdisciplinary approaches to make sure that the use and governance of AI is always steered by human dignity needs for all, at all times and under all circumstances. AI’s trajectory is not predetermined, but the clock is ticking and humanity may have less time than it thinks to control its collective destiny… 

Eminently worth reading in full. Whether or not one agrees with the author’s specific conclusions, his larger point– that we need to be mindful and purposive about the deployment of AI– is surely well-taken: “Sentience, Safe AI and The Future of Philosophy: A Transdisciplinary Analysis,” from @SustainHistory in @oxpubphil.

See also: “Thinking About AI, Before AI Disappears” from Quentin Hardy‘s new newsletter, Technohumanism. (source of image above).

* Father John Culkin, SJ, a Professor of Communication at Fordham University (and friend of Marshall McLuhan, to whom the quote is often incorrectly attributed)

###

As we think about thinking, we might recall that it was on this date in 1979 that Apple began work on the Lisa, which would become the world’s first commercial computer with a graphical user interface.

Originally intended to sell for $2,000 and ship in 1981, the Lisa was delayed until 1983 and sold for $10,000. Though it used technology ahead of its time, its high cost, relative lack of software, and some hardware reliability issues ultimately sank the Lisa. Still, much of the technology introduced by the Lisa (itself rooted in the earlier work of Doug Engelbart [and here] and Xerox PARC) influenced the development of the Macintosh as well as other future computer and operating system designs: e.g., a bitmapped display, a window-based graphical user interface, icons, folders, mouse (one-button), (Ethernet) networking, file servers, print servers, and email.

The Lisa, with its development team (source)