(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

Humor is said to be the quintessential human capacity, the last thing that AI could– will?– conquer…

New Yorker cartoons are inextricably woven into the fabric of American visual culture. With an instantly recognizable formula — usually, a black-and-white drawing of an imagined scenario followed by a quippy caption in sleek Caslon Pro Italic — the daily gags are delightful satires of our shared human experience, riffing on everything from cats and produce shopping to climate change and the COVID-19 pandemic. The New Yorker‘s famous Cartoon Caption Contest, which asks readers to submit their wittiest one-liners, gets an average of 5,732 entries each week, and the magazine receives thousands of drawings every month from hopeful artists.

What if a computer tried its hand at the iconic comics?

Playing on their ubiquity and familiarity, comics artist Ilan Manouach and AI engineer Ioannis [or Yiannis] Siglidis developed the Neural Yorker, an artificial intelligence (AI) engine that posts computer-generated cartoons on Twitter. The project consists of image-and-caption combinations produced by a generative adversarial network (GAN), a deep-learning-based model. The network is trained using a database of punchlines and images of cartoons found online and then “learns” to create new gags in the New Yorker‘s iconic style, with hilarious (and sometimes unsettling) results…
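For readers curious what that means under the hood, here is a minimal, hypothetical sketch of the adversarial training loop at the heart of any GAN: a generator learns to produce images that a discriminator cannot tell apart from real ones. The toy network sizes, the random stand-in “cartoons,” and every other detail below are illustrative assumptions, not the Neural Yorker’s actual architecture or data.

```python
# Minimal, hypothetical GAN training sketch (PyTorch) -- illustration only;
# not the Neural Yorker's actual architecture or data.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28   # toy sizes

# Generator: maps random noise to a flattened image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores an image as real (high logit) or generated (low logit).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator step: label real images 1 and generated images 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in "training data": random pixels in place of scraped cartoons.
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```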

Comics artist Ilan Manouach (@IlanManouach) and AI engineer Yiannis Siglidis created The Neural Yorker: “Computer-Generated New Yorker Cartoons Are Delightfully Weird.”

For comparison’s sake, see “142 Of The Funniest New Yorker Cartoons Ever.”

* Alan Kay

###

As we go for the guffaw, we might recall that it was on this date in 1922 that the first chapter in Walt Disney’s career as an animator came to a close when he released the 7th and next-to-last “Laugh-O-Gram” cartoon adaptation of a fairy tale, “Jack the Giant Killer.”

Disney’s first animated films began in 1920 as after-work projects when Disney was a commercial artist for an advertising company in Kansas City. He made these cartoons by himself and with the help of a few friends.

He started by persuading Frank Newman, Kansas City’s leading exhibitor, to include short snippets of animation in the series of weekly newsreels Newman produced for his chain of three theaters. Tactfully called “Newman Laugh-O-grams,” Disney’s footage was meant to mix advertising with topical humor…

The Laugh-O-grams were a hit, leading to commissions for animated intermission fillers and coming attractions slides for Newman’s theaters. Spurred by his success, the 19-year-old Disney decided to try something more ambitious: animated fairy tales. Influenced by New York animator Paul Terry’s spoofs of Aesop’s Fables, which had premiered in June 1920, Disney decided not only to parody fairy-tale classics but also to modernize them by having them play off recent events. With the help of high school student Rudy Ising, who later co-founded the Warner Brothers and MGM cartoon studios, and other local would-be cartoonists, Disney [made 7 animated shorts, of which “Jack, the Giant Killer” was the penultimate].

Walt Disney’s Laugh-O-grams

“Foresight begins when we accept that we are now creating a civilization of risk”*…

There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy…

As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?”

Agre’s papers are here.

* Jacques Ellul

###

As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean from London to U.S. President James Buchanan in Washington, D.C.– initiating a new era in global communications.

Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland, Canada. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.

After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.

source

“The future is already here — it’s just not very evenly distributed”*…

Brewarrina Aboriginal Fish Traps, 1883 (source)

The future is not a destination. We build it every day in the present. This is, perhaps, a wild paraphrasing of the acclaimed author and futurist William Gibson who, when asked what a distant future might hold, replied that the future was already here, it was just unevenly distributed. I often ponder this Gibson provocation, wondering where around me the future might be lurking. Catching glimpses of the future in the present would be helpful. But then, I think, rather than hoping to see a glimpse of the future, we could instead actively build one. Or at the very least tell stories about what it might be. Stories that unfold a world or worlds in which we might want to live – neither dystopian nor utopian, but ours. I know we can still shape those worlds and make them into somewhere that reflects our humanity, our different cultures and our cares.

Of course, it is not enough to tell stories about some distant or unevenly distributed future; we need to find ways of disrupting the present too. It might be less important to have a compelling and coherent vision of the future than an active and considered approach to building possible futures. It is as much about critical doing as critical thinking. One approach to the future might be to focus less on the instruments of technologies per se and more on the broader systems that will be necessary to bring those futures into existence…

It might be less important to have a compelling and coherent vision of the future than an active and considered approach to building possible futures. It is as much about critical doing as critical thinking…

AI is always, and already, a lot more than just a constellation of technologies. It exists as a set of conversations in which we are all implicated: we discuss AI, worry out loud about its ethical frameworks, watch movies in which it figures centrally, and read news stories about its impact…

[S]tories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together…

When I returned to Australia in 2017, I wanted to build other futures and to acknowledge the country where my work had started and where I was now working again. I knew I needed to find a different world and a different intersection, and to find new ways to tell stories of technology and of the future – I wanted some different pasts and some different touchstones.

I first saw a photograph of the Brewarrina Aboriginal Fish Traps in a Guardian news article, and the image stayed with me. That black-and-white photograph from the late 1800s showed long, sweeping lines of grey stones arcing across a fast-moving river. The water flowing around the lines of stones was tipped white at the breakpoints. And although there was no one in the image, the arrangement of the stones was deliberate, human-made and enduring. It was a photograph of one of the oldest known human-built technical systems on the planet. And while there are ongoing debates about its exact age – 4,000 years, 10,000 years, 40,000 years – there are no arguments about its complexity or sophistication…

I came to think that the importance of this place was not about the traps per se. It was about the system those traps create, and the systems in which they are, themselves, embedded. This is a system thousands of years in the making and keeping. This is a system that required concerted and continuous effort. This was something that required generations, both of accumulated knowledge about how the environment worked and accumulated knowledge about hydrology and about fish, and an accumulated commitment to continuing to build, sustain and upgrade that system over time.

The technical, cultural and ecological elements cement the significance of this place, not only as a heritage site but as a knowledge base on which contemporary systems could be built. Ideas about sustainability; ideas about systems that are decades or centuries in the making; ideas about systems that endure and systems that are built explicitly to endure. Systems that are built to ensure the continuities of culture feel like the kind of systems that we might want to be investing in now. This feels like the outline of a story of the future we would want to tell…

Now, we need to make a different kind of story about the future. One that focuses not just on the technologies, but on the systems in which these technologies will reside. The opportunity to focus on a future that holds those systems – and also on a way of approaching them in the present – feels both immense and acute. And the ways we might need to disrupt the present feel especially important in this moment of liminality, disorientation and profound unease, socially and ecologically. In a present where the links towards the future seem to have been derailed from the tracks we’ve laid in past decades, there is an opportunity to reform. Ultimately, we would need to think a little differently, ask different kinds of questions, bring as many diverse and divergent kinds of people along on the journey and look holistically and critically at the many propositions that computing in particular – and advanced technologies in general – present.

For me, the Brewarrina Fish Traps are a powerful way of framing how current technological systems should and could unfold. These present a very different future, one we can glimpse in the present and in the past; one that always is and always will be. In this moment, we need to be reminded that stories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together.

Genevieve Bell (@feraldata) on the importance of stories of systems, serendipity, and grace: “Touching the future.” (via Sentiers)

For more, see her Long Now talk, “The 4th Industrial Revolution: Responsible & Secure AI.”

And for an extended riff on the context and implications of the Richard Brautigan poem that she quotes in her piece, see Adam Curtis’ “All Watched Over By Machines Of Loving Grace” (streaming on Amazon Prime).

And for an apposite look at the Renaissance, when mechanical inventions served as a medium for experimental thinking about all aspects of the cosmos, see “When Engineers Were Humanists.”

* William Gibson (in an interview on Fresh Air in August, 1993; repeated by him– and others– many, many times since)

###

As we think like good ancestors, we might spare a thought for Henry, Duke of Cornwall. The first child of King Henry VIII of England and his first wife, Catherine of Aragon, and celebrated as the heir apparent, he died within weeks of his birth, on this date in 1511. His death and Henry VIII’s failure to produce another surviving male heir with Catherine led to succession and marriage crises that affected the relationship between the English church and Roman Catholicism, giving rise to the English Reformation.

Michael Sittow’s Virgin and Child. The woman appears to have been modelled on Catherine of Aragon.

source

“Facts alone, no matter how numerous or verifiable, do not automatically arrange themselves into an intelligible, or truthful, picture of the world. It is the task of the human mind to invent a theoretical framework to account for them.”*…

PPPL physicist Hong Qin in front of images of planetary orbits and computer code

… or maybe not. A couple of decades ago, your correspondent came across a short book that aimed to explain how we think we know what we think we know, Truth: A History and a Guide for the Perplexed, by Felipe Fernández-Armesto (then, a professor of history at Oxford; now, at Notre Dame)…

According to Fernández-Armesto, people throughout history have sought to get at the truth in one or more of four basic ways. The first is through feeling. Truth is a tangible entity. The third-century B.C. Chinese sage Chuang Tzu stated, ”The universe is one.” Others described the universe as a unity of opposites. To the fifth-century B.C. Greek philosopher Heraclitus, the cosmos is a tension like that of the bow or the lyre. The notion of chaos comes along only later, together with uncomfortable concepts like infinity.

Then there is authoritarianism, ”the truth you are told.” Divinities can tell us what is wanted, if only we can discover how to hear them. The ancient Greeks believed that Apollo would speak through the mouth of an old peasant woman in a room filled with the smoke of bay leaves; traditionalist Azande in the Nilotic Sudan depend on the response of poisoned chickens. People consult sacred books, or watch for apparitions. Others look inside themselves, for truths that were imprinted in their minds before they were born or buried in their subconscious minds.

Reasoning is the third way Fernández-Armesto cites. Since knowledge attained by divination or introspection is subject to misinterpretation, eventually people return to the use of reason, which helped thinkers like Chuang Tzu and Heraclitus describe the universe. Logical analysis was used in China and Egypt long before it was discovered in Greece and in India. If the Greeks are mistakenly credited with the invention of rational thinking, it is because of the effective ways they wrote about it. Plato illustrated his dialogues with memorable myths and brilliant metaphors. Truth, as he saw it, could be discovered only by abstract reasoning, without reliance on sense perception or observation of outside phenomena. Rather, he sought to excavate it from the recesses of the mind. The word for truth in Greek, aletheia, means ”what is not forgotten.”

Plato’s pupil Aristotle developed the techniques of logical analysis that still enable us to get at the knowledge hidden within us. He examined propositions by stating possible contradictions and developed the syllogism, a method of proof based on stated premises. His methods of reasoning have influenced independent thinkers ever since. Logicians developed a system of notation, free from the associations of language, that comes close to being a kind of mathematics. The uses of pure reason have had a particular appeal to lovers of force, and have flourished in times of absolutism like the 17th and 18th centuries.

Finally, there is sense perception. Unlike his teacher, Plato, and many of Plato’s followers, Aristotle realized that pure logic had its limits. He began with study of the natural world and used evidence gained from experience or experimentation to support his arguments. Ever since, as Fernández-Armesto puts it, science and sense have kept time together, like voices in a duet that sing different tunes. The combination of theoretical and practical gave Western thinkers an edge over purer reasoning schemes in India and China.

The scientific revolution began when European thinkers broke free from religious authoritarianism and stopped regarding this earth as the center of the universe. They used mathematics along with experimentation and reasoning and developed mechanical tools like the telescope. Fernández-Armesto’s favorite example of their empirical spirit is the grueling Arctic expedition in 1736 in which the French scientist Pierre Moreau de Maupertuis determined (rightly) that the earth was not round like a ball but rather an oblate spheroid…

source

One of Fernández-Armesto’s most basic points is that our capacity to apprehend “the truth”– to “know”– has developed throughout history. And history’s not over. So, your correspondent wondered, mightn’t there emerge a fifth source of truth, one rooted in the assessment of vast, ever-more-complete data maps of reality– a fifth way of knowing?

Well, those days may be upon us…

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a ‘serving algorithm,’ then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
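To make that “data to data” claim concrete, here is a toy sketch of the general idea – it is not Qin’s algorithm or his ‘serving algorithm’: fit a small regressor to past orbital positions, then roll it forward to predict future positions without ever invoking a law of motion. The synthetic circular orbit, the window size, and the regressor below are all illustrative assumptions.

```python
# Toy, data-only orbit predictor -- NOT Qin's actual method. A small neural
# network learns to map the last few positions of a (synthetic) orbit to the
# next one, then is rolled forward with no equations of motion in the loop.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in "observations": a circular orbit sampled over many revolutions.
t = np.linspace(0, 20 * np.pi, 4000)
orbit = np.stack([np.cos(t), np.sin(t)], axis=1)        # (x, y) positions

WINDOW = 5  # predict the next position from the previous five
X = np.stack([orbit[i:i + WINDOW].ravel() for i in range(len(orbit) - WINDOW)])
y = orbit[WINDOW:]

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X, y)

# Roll the model forward: each prediction becomes part of the next input window.
history = list(orbit[-WINDOW:])
for _ in range(10):
    window = np.array(history[-WINDOW:]).ravel()[None, :]
    history.append(model.predict(window)[0])

print(history[-1])  # extrapolated position -- "data to data", no Newton in between
```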

The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless ‘translate’ a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

“I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”

Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” [Qin’s collaborator Eric] Palmerduca said…

But then, as Edwin Hubble observed, “observations always involve theory,” theory that’s implicit in the particulars and the structure of the data being collected and fed to the AI. So, perhaps this is less a new way of knowing, than a new way of enhancing Fernández-Armesto’s third way– reason– as it became the scientific method…

The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”

In either case: “New machine learning theory raises questions about nature of science.”

* Francis Bello

###

As we experiment with epistemology, we might send carefully-observed and calculated birthday greetings to Georg Joachim de Porris (better known by his professional name, Rheticus); he was born on this date in 1514. A mathematician, astronomer, cartographer, navigational-instrument maker, medical practitioner, and teacher, he was well-known in his day for his stature in all of those fields. But he is surely best-remembered as the sole pupil of Copernicus, whose work he championed– most impactfully, facilitating the publication of his master’s De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres)… and informing the most famous work by yesterday’s birthday boy, Galileo.

source

“A year spent in artificial intelligence is enough to make one believe in God”*…

A scan of the workings of an automaton of a friar, c1550. Possibly circle of Juanelo Turriano (c1500-85), probably Spanish.

The wooden monk, a little over two feet tall, ambles in a circle. Periodically, he raises a gripped cross and rosary towards his lips and his jaw drops like a marionette’s, affixing a kiss to the crucifix. Throughout his supplications, those same lips seem to mumble, as if he’s quietly uttering penitential prayers, and occasionally the tiny monk will raise his empty fist to his torso as he beats his breast. His head is finely detailed, a tawny chestnut colour with a regal Roman nose and dark hooded eyes, his pate scraped clean of even a tonsure. For almost five centuries, the carved clergyman has made his rounds, wound up by an ingenious internal mechanism hidden underneath his carved Franciscan robes, a monastic robot making his clockwork prayers.

Today his home is the Smithsonian National Museum of American History in Washington, DC, but before that he resided in that distinctly un-Catholic city of Geneva. His origins are more mysterious, though similar divine automata have been attributed to Juanelo Turriano, the 16th-century Italian engineer and royal clockmaker to the Habsburgs. Following Philip II’s son’s recovery from an illness, the reverential king supposedly commissioned Turriano to answer God’s miracle with a miracle of his own. Scion of the Habsburgs’ massive fortune of Aztec and Incan gold, hammer against the Protestant English and patron of the Spanish Inquisition, Philip II was every inch a Catholic zealot whom the British writer and philosopher G K Chesterton described as having a face ‘as a fungus of a leprous white and grey’, overseeing his empire in rooms where ‘walls are hung with velvet that is black and soft as sin’. It’s a description that evokes similarly uncanny feelings for any who should view Turriano’s monk, for there is one inviolate rule about the robot: he is creepy.

Elizabeth King, an American sculptor and historian, notes that an ‘uncanny presence separates it immediately from later automata: it is not charming, it is not a toy … it engages even the 20th-century viewer in a complicated and urgent way.’ The late Spanish engineer José A García-Diego is even more unsparing: the device, he wrote, is ‘considerably unpleasant’. One reason for his unsettling quality is that the monk’s purpose isn’t to provide simulacra of prayer, but to actually pray. Turriano’s device doesn’t serve to imitate supplication, he is supplicating; the mechanism isn’t depicting penitence, the machine performs it…

The writer Jonathan Merritt has argued in The Atlantic that rapidly escalating technological change has theological implications far beyond the political, social and ethical questions that Pope Francis raises, claiming that the development of self-aware computers would have implications for our definition of the soul, our beliefs about sin and redemption, our ideas about free will and providence. ‘If Christians accept that all creation is intended to glorify God,’ Merritt asked, ‘how would AI do such a thing? Would AI attend church, sing hymns, care for the poor? Would it pray?’ Of course, to the last question we already have an answer: AI would pray, because as Turriano’s example shows, it already has. Pope Francis also anticipated this in his November prayers, saying of AI ‘may it “be human”.’

While nobody believes that consciousness resides within the wooden head of a toy like Turriano’s, no matter how immaculately constructed, his disquieting example serves to illustrate what it might mean for an artificial intelligence in the future to be able to orient itself towards the divine. How different traditions might respond to this is difficult to anticipate. For Christians invested in the concept of an eternal human soul, a synthetic spirit might be a contradiction. Buddhist and Hindu believers, whose traditions are more apt to see the individual soul as a smaller part of a larger system, might be more amenable to the idea of spiritual machines. That’s the language that the futurist Ray Kurzweil used in calling our upcoming epoch the ‘age of spiritual machines’; perhaps it’s just as appropriate to think of it as the ‘Age of Turriano’, since these issues have long been simmering in the theological background, only waiting to boil over in the coming decades.

If an artificial intelligence – a computer, a robot, an android – is capable of complex thought, of reason, of emotion, then in what sense can it be said to have a soul? How does traditional religion react to a constructed person, at one remove from divine origins, and how are we to reconcile its role in the metaphysical order? Can we speak of salvation and damnation for digital beings? And is there any way in which we can evangelise robots or convert computers? Even for steadfast secularists and materialists, for whom those questions make no philosophical sense for humans, much less computers, that this will become a theological flashpoint for believers is something to anticipate, as it will doubtlessly have massive social, cultural and political ramifications.

This is no scholastic issue of how many angels can dance on a silicon chip, since it seems inevitable that computer scientists will soon be able to develop an artificial intelligence that easily passes the Turing test, that surpasses the understanding of those who’ve programmed it. In an article for CNBC entitled ‘Computers Will Be Like Humans By 2029’ (2014), the journalist Cadie Thompson quotes Kurzweil, who confidently (if controversially) contends that ‘computers will be at human levels, such as you can have a human relationship with them, 15 years from now.’ With less than a decade left to go, Kurzweil explains that he’s ‘talking about emotional intelligence. The ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy, that is the cutting edge of human intelligence, that is not a sideshow.’

Often grouped with other transhumanists who optimistically predict a coming millennium of digital transcendence, Kurzweil is a believer in what’s often called the ‘Singularity’, the moment at which humanity’s collective computing capabilities supersede our ability to understand the machines that we’ve created, and presumably some sort of artificial consciousness develops. While bracketing out the details, let’s assume that Kurzweil is broadly correct that, at some point in this century, an AI will develop that outstrips all past digital intelligences. If it’s true that automata can then be as funny, romantic, loving and sexy as the best of us, it could also be assumed that they’d be capable of piety, reverence and faith. When it’s possible to make not just a wind-up clock monk, but a computer that’s actually capable of prayer, how then will faith respond?..

Can a robot pray? Does an AI have a soul? Advances in automata raise theological debates that will shape the secular world; from Ed Simon (@WithEdSimon): “Machine in the ghost.” Do read the piece in full.

Then, for a different (but in the end, not altogether contradictory) view: “The Thoughts The Civilized Keep.”

And for another (related) angle: “Is it OK to torture a computer program?”

For more on the work of sculptor and historian Elizabeth King on the Smithsonian automaton friar, please see her articles here and here, and her forthcoming book, Mysticism and Machinery.

* Alan Perlis (first recipient of the Turing Award)

###

As we enlarge the tent, we might send revelatory birthday greetings to Albert Hofmann; he was born on this date in 1906.  As a young chemist at Sandoz in Switzerland, Hofmann was searching for a respiratory and circulatory stimulant when he fabricated lysergic acid diethylamide (LSD); handling it, he absorbed a bit through his fingertips and realized that the compound had psychoactive effects.  Three days later, on April 19, 1943– a day now known as “Bicycle Day”– Hofmann intentionally ingested 250 micrograms of LSD then rode home on a bike, a journey that became, pun intended, the first intentional acid trip.  Hofmann was also the first person to isolate, synthesize, and name the principal psychedelic mushroom compounds psilocybin and psilocin.

 source
