“This incompleteness is all we have”*…
The impulse to "systematize" morality is as old as philosophy. Many now hope that AI will discover and organize moral truths. But Elad Uzan suggests that Kurt Gödel's work on incompleteness demonstrates that deciding what is right will always be our burden…
Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. Unlike human judges or policymakers, a machine would not be swayed by personal interests or lapses in reasoning. It does not lie. It does not accept bribes or pleas. It does not weep over hard decisions.
Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.
Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.
But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. Isaac Newton’s laws of motion and James Clerk Maxwell’s equations are classic examples: compact, elegant formulations from which wide-ranging predictions about the physical world can be deduced.
Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain – in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems. A consequentialist begins with the idea that actions should maximise wellbeing; a deontologist starts from the idea that actions must respect duties or rights. These basic commitments function similarly to their counterparts in physics: they define the structure of moral reasoning within each ethical theory.
Just as AI is used in physics to operate within existing theories – for example, to optimise experimental designs or predict the behaviour of complex systems – it can also be used in ethics to extend moral reasoning within a given framework. In physics, AI typically operates within established models rather than proposing new physical laws or conceptual frameworks. It may calculate how multiple forces interact and predict their combined effect on a physical system. Similarly, in ethics, AI does not generate new moral principles but applies existing ones to novel and often intricate situations. It may weigh competing values – fairness, harm minimisation, justice – and assess their combined implications for what action is morally best. The result is not a new moral system, but a deepened application of an existing one, shaped by the same kind of formal reasoning that underlies scientific modelling. But is there an inherent limit to what AI can know about morality? Could there be true ethical propositions that no machine, no matter how advanced, can ever prove?
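The "within-framework" point can be made concrete with a toy sketch. Everything below (the value names, the weights, the candidate actions and their scores) is invented purely for illustration; no real system reasons this way, and the point is precisely that the machine only extends the framework it is handed:

```python
# Toy sketch: applying a fixed ethical framework to candidate actions.
# The framework is encoded as weights over values; the "moral reasoning"
# is just a weighted sum -- which is why such a system can deepen the
# application of a framework but never generate a new moral principle.

VALUES = ("fairness", "harm_minimisation", "justice")

# Hypothetical per-value scores for each candidate action, on a 0-1 scale.
ACTIONS = {
    "triage_by_need":    {"fairness": 0.6, "harm_minimisation": 0.9, "justice": 0.7},
    "triage_by_lottery": {"fairness": 0.9, "harm_minimisation": 0.4, "justice": 0.5},
    "first_come_first":  {"fairness": 0.3, "harm_minimisation": 0.5, "justice": 0.4},
}

def best_action(weights: dict) -> str:
    """Return the action with the highest weighted value score.

    `weights` encodes the framework's foundational commitments; change
    the weights and a different action comes out 'best'.
    """
    def score(action: str) -> float:
        return sum(weights[v] * ACTIONS[action][v] for v in VALUES)
    return max(ACTIONS, key=score)

# A broadly consequentialist weighting privileges harm minimisation...
consequentialist = {"fairness": 0.2, "harm_minimisation": 0.6, "justice": 0.2}
# ...while an egalitarian weighting privileges fairness.
egalitarian = {"fairness": 0.6, "harm_minimisation": 0.2, "justice": 0.2}

print(best_action(consequentialist))  # -> triage_by_need
print(best_action(egalitarian))       # -> triage_by_lottery
```

Note that nothing in the code can tell us which weighting is *right*; that choice sits outside the system, which is exactly the limitation the essay goes on to explore.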
These questions echo a fundamental discovery in mathematical logic, probably the most fundamental insight ever to be proven: Kurt Gödel’s incompleteness theorems. They show that any logical system powerful enough to describe arithmetic is either inconsistent or incomplete. In this essay, I argue that this limitation, though mathematical in origin, has deep consequences for ethics, and for how we design AI systems to reason morally…
Eminently worth reading in full: “The incompleteness of ethics,” from @aeon.co.
And as if that were not enough, consider the cultural challenge implicit in this chart:
More background at “Cultural Bias in LLMs” (and here and here).
* Charles Bukowski
###
As we own up to it, we might recall that it was on this date in 1942 that actress Hedy Lamarr and musician George Antheil received a patent (#2,292,387) for a frequency-hopping radio communication system which later became the basis for modern technologies like Bluetooth, wireless telephones, and Wi-Fi.
Hedy Lamarr made it big in acting before ever moving to the United States. Her role in the Czech film Ecstasy got international attention in 1933 for containing scandalous, intimate scenes that were unheard of in the movie industry up until then.
Backlash from her early acting career was the least of her worries, however, as tensions began to rise in Europe. Lamarr, born Hedwig Eva Maria Kiesler, grew up in a Catholic household in Austria, but both of her parents had a Jewish heritage. In addition, she was married to Friedrich Mandl, a rich ammunition manufacturer with connections to both Fascist Italy and Nazi Germany.
Her time with Friedrich Mandl was bittersweet. While the romance quickly died and Mandl became very possessive of his young wife, Lamarr was often taken to meetings on scientific innovations in the military world. These meetings are said to have been the spark that led to her becoming an inventor. As tensions in both her household and in the world around her became overwhelming, she fled Europe and found her way to the United States through a job offer from Hollywood’s MGM Studios.
Lamarr became one of the most sought-after leading women in Hollywood and starred in popular movies like the 1939 film Algiers, but once the United States began helping the Allies and preparing to possibly enter the war, Lamarr almost left Hollywood forever. Her eyes were no longer fixed on the bright lights of the film set but on the flashes of bombs and gunfire. Lamarr wanted to join the Inventors’ Council in Washington, DC, where she thought she would be of better service to the war effort.
Lamarr’s path to inventing the cornerstone of Wi-Fi began when she heard about the Navy’s difficulties with radio-controlled torpedoes. She recruited George Antheil, a composer she met through MGM Studios, in order to create what was known as a Secret Communication System.
The idea behind the invention was to create a system that constantly changed frequencies, making it difficult for the Axis powers to intercept or jam the radio signal. The invention would help the Navy make its torpedo systems stealthier and less likely to be rendered useless by enemy interference.
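The core idea can be sketched in a few lines of code. This is an illustrative modern analogue, not the electromechanical mechanism of the 1942 patent (which stepped both ends through a pre-punched sequence of 88 frequencies, one per piano key, like a player-piano roll): transmitter and receiver derive the same pseudo-random channel schedule from a shared secret, so they hop in lockstep while an eavesdropper parked on any single frequency hears only fragments.

```python
import random

CHANNELS = 88  # the Lamarr-Antheil patent specified 88 frequencies

def hop_sequence(seed: int, slots: int) -> list[int]:
    """Derive a channel schedule from a shared secret seed.

    Both ends run this with the same seed, so they visit the same
    channels in the same order without ever transmitting the schedule.
    """
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(slots)]

shared_seed = 1942  # hypothetical shared secret, set before the torpedo launches
tx_schedule = hop_sequence(shared_seed, slots=10)
rx_schedule = hop_sequence(shared_seed, slots=10)

assert tx_schedule == rx_schedule  # sender and receiver hop in lockstep
```

An adversary without the seed sees what looks like random channel changes each time slot, which is why the scheme resists both eavesdropping and narrowband jamming.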
Lamarr was the brains behind the invention, with her background knowledge in ammunition, and Antheil was the artist that brought it to life, using the piano for inspiration. In 1942, under her then-married name, Hedy Kiesler Markey, she filed for a patent for the Secret Communication System, patent case file 2,292,387, and proposed it to the Navy.
The first part of Lamarr and Antheil’s Secret Communication System story did not see a happy Hollywood ending. The Navy refused to accept the new technology during World War II. Not only did the invention come from a civilian, but it was complex and ahead of its time.
As the invention sat unused, Lamarr continued on in Hollywood and found other ways to help with the war effort, such as working with the USO. It wasn’t until Lamarr’s Hollywood career came to an end that her invention started gaining notice.
Around the time Lamarr filmed her last scene with the 1958 film The Female Animal, her patented invention caught the attention of other innovators in technology. The Secret Communication System saw use in the 1950s during the development of CDMA network technology in the private sector, while the Navy officially adopted the technology in the 1960s around the time of the Cuban Missile Crisis. The methods described in the patent assisted greatly in the development of Bluetooth and Wi-Fi.
Despite the world finally embracing the methods of the patent as early as the mid-to-late 1950s, the Lamarr-Antheil duo were not recognized or rewarded for their invention until the late 1990s and early 2000s. They both received the Electronic Frontier Foundation Pioneer Award and the Bulbie Gnass Spirit of Achievement Bronze Award, and in 2014 they were inducted into the National Inventors Hall of Fame…

“Memory is a wonderfully useful tool, and without it judgement does its work with difficulty”*…
Alexander Chee shares a memory…
It is the year 2004 and I take a seat at the counter of the Koreatown Denny’s, just three blocks from my apartment, and for a little while, I watch as a blonde waitress with makeup the colors of a tropical fish smiles at me every time she walks by. Her path is constant: she arrives from one side, departs from the other, grabbing or leaving pots of coffee on the warmer. She leaves a cup with me at my request and, in this way, I become part of the ritual.
I am a little drunk from drinks and no food. The day has become a kind of strange dream, telescoping down to the menu in front of me. I am here in Los Angeles for what will turn out to be seven months but I don’t know this yet.
At the counter, on one side of me are two young men studying a text in Spanish, the books so thick I assume they are Bibles. They ignore their pancake stacks. On the other side, a grizzled man of middle age sits, eating a hot fudge sundae.
Let me ask you a question, asks the man with the sundae.
Sure, I reply.
Is there ever a reason, a moral reason, to take a man’s life.
He spoons through the last bit of the hot fudge, putting it in his mouth. His hair, gray wire like a shoe brush; his glasses fish-eye his eyes. Say he is a judge, he says, and he sent you to prison for three years, didn’t allow you to have a fair trial. You know he had it in for you.
I look away from him and see that what I thought were the Bibles of the men next to him are Plato’s Dialogues, translated into Spanish. A sign that I might be in a Greek tragedy.
You do the time, the man continues. You get out. Would you have a right to take his life. A moral right.
I am God’s monkey, I think to myself. Watch me dance.
No, I say. Your duty after you leave prison is to yourself. I say this while looking forward, as if we are both in a car and driving. A moment later, I glance sideways and see the man’s wild eyes settle for a moment.
The reason you’re angry is because he didn’t value your life. To go and try to take his, that destroys what might be left for you in life.
But what if it felt good to do it, he says.
I note the use of the past tense. A confession? The waitress walks by again. Pleasure isn’t the highest value in this life, I say. Pleasure is only pleasure. It has no good or bad to it. That wouldn’t be a moral reason, at least.
This questioner takes it in. Hmm, he says. Thanks, he says.
To destroy him is to take some or all of what you have left and destroy it, I say.
He nods. Thanks, he says.
He pays and leaves.
Did he believe me, I wonder. I will feel a little more alone after that night in some way I will never understand and always try to forget.
Beside me now are only the two students of Plato. I order the sampler, it comes fast—mozzarella sticks, chicken fingers, onion rings. I eat them all…
From the annals of existential encounters: “The Denny’s on Wilshire Boulevard,” from @alexanderchee.bsky.social, one of his regular “I Come Here Often” columns in the LARB Quarterly (@lareviewofbooks.bsky.social)
* Montaigne
###
As we wonder, we might note that today is, appropriately to the piece above, National Chicken Fingers Day.
“The mature person becomes able to differentiate feelings into as many nuances, strong and passionate experiences, or delicate and sensitive ones, as in the different passages of music in a symphony”*…
As with the heart, so the head… Joshua May, a professor with training in philosophy, the social sciences, and behavioral science, uses scientific research to examine moral controversies, ethics in science (and life), and the mechanics and philosophies of social change. In his teaching, his research, and his recent book, Neuroethics: Agency in the Age of Brain Science, he reminds us that binary, all-or-nothing arguments often rest on false dichotomies. He elaborates in an interview with JSTOR Daily…
How do moral, social, and political values influence the sciences? The social sciences? How can we become more virtuous in an era of AI, political polarization, and factory farming? These are just a few questions behind Joshua May’s wide-ranging body of research and teaching. In his own words, his work sits at “the intersection of ethics and science,” fed by a desire to understand moral controversies and social change—and the relationship between those things. He encourages us to resist false dichotomies and black-and-white thinking, looking instead for a third, fourth, or even fifth approach to a moral issue (see his discussion of factory farming below for an example). He’s considered the influence of emotions on moral judgement, the emotions provoked by bioethical issues such as human cloning, and the roles of empathy and ego in altruistic behavior. His longstanding interest in free will led to the 2022 co-edited volume Agency in Mental Disorder, which brings philosophical reasoning about limits and culpability to bear on addiction, mental illness, and psychotherapy.
May is also a “public philosopher,” an active contributor to popular debates on neurodiversity, veganism, and politics…
…What’s the best discovery you’ve made in your research?
False dichotomies are everywhere in ethics. Debates about factory farming focus on whether people should strictly omit all animal products from their diet (to go vegan or at least vegetarian) or just eat whatever they want. But I’ve argued, with my collaborator Victor Kumar, that there’s a distinct reducetarian path: most people should imperfectly reduce their consumption of animal products. The appropriate level of reduction all depends on the person and their circumstances. Similarly, does neuroscience show that we have free will or that it’s just an illusion? I think a careful look at the evidence suggests a third option: we have free will, but less than is commonly presumed. When it comes to neurological differences, like autism and ADHD, the false choice is between viewing them as either deficits or mere differences. But they can be one or the other (or both), depending on the person and their circumstances. The same goes for addiction: Is it a brain disease or a moral failing? I’ve argued for a neglected third route: it’s a disorder that nevertheless involves varying levels of control depending on the individual. Throughout moral and political debates, false dichotomies seem to dominate, but in my view, nuance should be the norm…
“Joshua May and the Search for Philosophical Nuance,” from @joshdmay.bsky.social and @jstordaily.bsky.social.
See also: “Stop the ‘good’ vs ‘bad’ snap judgments and watch your world become more interesting,” from @lorrainebesser.bsky.social (and source of the image at the top)
* “The mature person becomes able to differentiate feelings into as many nuances, strong and passionate experiences, or delicate and sensitive ones, as in the different passages of music in a symphony. Unfortunately, many of us have feelings limited like notes in a bugle call.” – Rollo May (no known relationship to Joshua)
###
As we distinguish details, we might recall that it was on this date in 1966 that the Roman Catholic Church announced, via a notification from the Congregation for the Doctrine of the Faith, the abolition of the Index Librorum Prohibitorum (“Index of Prohibited Books”), which was originally instituted in 1557. The communique stated that, while the Index maintained its moral force, in that it taught Christians to beware, as required by the natural law itself, of those writings that could endanger faith and morality, it no longer had the force of ecclesiastical positive law with the associated penalties. So… read on.
“‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.'”*…
Like today’s large language models, some 16th-century humanists (like Erasmus) had techniques to automate writing. But as Hannah Katznelson explains, others (like Rabelais) called foul…
The Renaissance scholar and educator Erasmus of Rotterdam opens his polemical treatise The Ciceronian (1528) by describing the utterly dysfunctional writing process of a character named Nosoponus. The Ciceronian is structured as a dialogue, with two mature writers, Bulephorus and Hypologus, trying to talk Nosoponus out of his paralysing obsession with stylistic perfection. Nosoponus explains that it would take him weeks of fruitless writing and rewriting to produce a casual letter in which he asks a friend to return some borrowed books. He says that writing requires such intense concentration that he can do it only at night, when no one else is awake to distract him, and even then his perfectionism is so intense that a single sentence becomes a full night’s work. Nosoponus goes over what he’s written again and again, but remains so dissatisfied with the quality of his language that eventually he just gives up.
Nosoponus’s problem might resonate. Who has not spent too long going over the wording of a simple email, at some point or another? Today there is an easy fix: we have large language models (LLMs) to write our letters for us, helpfully proffering suggestions as to what we might say, and how we might phrase it. When I input Nosoponus’s intended request into GPT-4, it generated the following almost instantly:
Hey [Friend’s Name],
Hope you’re doing well! I just realised I never got those books back that I lent you a while ago. No rush, but whenever you get a chance, I’d love to get them back. Let me know what works for you! Thanks!
Nosoponus
But there was a solution in the 16th century, too. A humanist education on the Erasmian model could train its students to produce letters of any length, on any topic – quickly, easily and eloquently. The French humanist François Rabelais, a contemporary of Erasmus, appears to have understood these compositional techniques as automating the creation of text in a way that, retrospectively, looks a lot like how LLMs function. If we want to understand LLMs, and what they are and aren’t capable of, we can look at earlier versions of the same technology – like Erasmian humanism. We can also read authors like Rabelais, who is already thinking about automatic text-generation along these lines, as someone who appreciates the effectiveness of Erasmian generative technology, but at the same time sees it as vitiating the social force of language and, ultimately, ruining language as a tool for moral and political life…
[Katznelson recounts Erasmus’s efforts and Rabelais’s response, and unpacks the important differences between our own authentic speech and language created to speak for us, along with their practical and moral implications…]
What lessons from the 16th century can tell us about AI and LLMs: “Methodical banality,” from @aeon.co.
* Lewis Carroll, Through the Looking Glass
###
As we honor authenticity, we might recall that it was on this date in 1886 that three U.S. patents were issued to Alexander Graham Bell’s Volta Labs for “recording and reproducing speech and other sounds.” The Graphophone was an improved (and the first practical) version of the Edison phonograph (from 1877), and became the foundation on which the speech recording (e.g., dictaphone) and recorded music (and spoken word) industries began to grow.
“The advance of genetic engineering makes it quite conceivable that we will begin to design our own evolutionary progress”*…
The obligations of a multi-day meeting (and the travel involved) mean that, from this issue, (R)D will be on pause until February 12 or 13 (depending on how connections play out…)
… and indeed the evolutionary progress of other species. But, Deputy Co-chair of the Nuffield Council on Bioethics Melanie Challenger asks, have we been sufficiently thoughtful about the implications of this power?…
In 2016, Klaus Schwab announced that we had entered the Fourth Industrial Revolution. This is the era of the industrialization of biology, the leveraging of technologies to modify biological materials to meet human goals. While the first two Industrial Revolutions exploited energy and materials and the Third exploited digital information, the current revolution is a direct manipulation of life-forms and life’s substances.
The signature invention of this new era is CRISPR, dubbed “genetic scissors.” CRISPR is a ground-breaking method of making precise changes to DNA for a wide range of possible uses from disease reduction and elimination to the eradication of “pest” species and increases in the productivity of farmed animals. CRISPRs (the best-known system being CRISPR-Cas9) originate in RNA-based bacterial defense systems. Naturally occurring in species of bacteria, the Cas9 enzyme cuts the genomes of bacteriophages (viruses that will attack a bacterium), saving a record for defense against future infections. Scientists realized that this immunological strategy could be coopted to innovate a general tool for cutting DNA.
The optimism among those that seek to utilize these tools has been palpable for some time. As noted by the researchers at The Roslin Institute, creators of Dolly the Sheep, the world’s first cloned mammal: “Until recently, we have only been able to dream of…the ability to induce precise insertions or deletions easily and efficiently in the germline of livestock. With the advent of genome editors this is now possible.”
But the technologies of this new industrial era present ethical dilemmas and unknown consequences. What will it take to ensure that this revolution avoids worsening the enormous challenges we already face, especially from biodiversity loss and climate change? How can we get the balance right between the benefits and risks of human inventiveness?
In the 1980s, tech theorist David Collingridge presented his eponymous dilemma for those seeking to control potentially disruptive technologies. First, there is an “information problem” in which significant impacts are often invisible until the technology is already in use. Second, there is a “power problem” in which the technology becomes difficult to shape, regulate or scale back once it has become integrated in our lives. If we are going to navigate the Fourth Industrial Revolution successfully, we need to examine our use of CRISPR through the Collingridge dilemma.
The investors and engineers of the first industrial revolutions in the nineteenth century provide a vivid example of the information problem. They hoped that innovations like the combustion engine would unlock efficiency across multiple human sectors, from transportation to logistics to tourism. Such optimism was not unwarranted. Yet, as Collingridge’s dilemma suggests, it is easier to picture gains than to predict trouble. Building road systems and infrastructure carved capital movements into the landscape, symbolising freedom and the flow of wealth and creativity. Yet the striking visual parallels with our circulatory system did not stimulate anyone to forecast the ninety per cent of people today who are exposed to unsafe pollution levels from traffic or the associated health burdens from heart and lung disease to asthma. Nobody then foresaw the yearly deaths of two billion or so non-human vertebrates on our roads today, or that high traffic areas would cause localised declines in insect abundance of at least a quarter and, in some studies, as much as eighty per cent.
And, of course, most calamitous of all, there is climate change. Traffic emissions account for a fifth of all contributions to global warming. Yet the idea that a profitable and efficient machine like the combustion engine might precede devastating shifts in temperature and weather patterns was scarcely conceivable at the time. Now, it is a near ubiquitous feature of our understanding of the world.
When it comes to the engineering of biology, a similar information problem abounds. Not only is our understanding of biological life incomplete, but we know little about what the industrial processes that we are advancing inside the cells of organisms will do. The changes are both physically and ethically occluded. The ramifications of this and other related biotechnologies are not only rendered uncertain by the inherently complex nature of biological systems but are largely inaccessible to our imaginations.
We must struggle with the radical character of the industrialization of biology. Gene drives (a tool to increase the likelihood of passing on a gene) can weaponize the bodies and reproductive strategies of organisms to bias evolution in a directed way. Artificial chimeric organisms (those composed of cells from more than one species) mix and match biological traits and functions to bring about beings that wouldn’t occur otherwise, transforming autonomous organisms into useful parts for plug and play. But while evolutionary processes will sift those forms and strategies that most benefit future organisms, our acts of creation primarily benefit us alone. Survival of the fittest gives way to the contrivance of the functional.
Yet, despite the disruptive nature of these technologies, CRISPR is already entrenched in our research and economic landscape: here is the power problem of our new technology. The efficiency of modern versions of CRISPR has allowed the technology to pick up users fast. It is now a commonplace tool in labs around the world – with uses amplified during the pandemic – and continues to be utilized in ethically provocative trials, including the cloning of mammal species. CRISPR has been normalised by stealth.
This largely uncontested rollout has been enabled by biases in the evaluation of who is at risk. Put bluntly, humans worry about humans, and take risks to non-humans less seriously. As such, there are vastly different acceptance thresholds for certain kinds of uses and these can be exploited by those that seek to deregulate or profit from the technologies…
… This discrepancy is evident in the anxieties of Jennifer Doudna, one of the Nobel-winning scientists who made the CRISPR breakthrough. In her book, A Crack in Creation, she writes of a dream in which Hitler appears to her with the face of a pig and questions her excitedly about the power she has unleashed. Doudna’s anxieties relate not to the pigs of her dream (who are subject to a wide range of CRISPR applications) but to the potential of eugenics re-emerging in human societies. Her dream reflects not only the inevitability that any technology such as this will be as destructive as it is rewarding, but also that we must confront uncomfortable ideas about what it is to be a creature as much as a creator. Recognizing that these technologies work in the bodies of all biological beings, including humans, is a continual assault on the reasoning behind a hard moral border between us and them.
At present, the lives of non-human animals are the experimental landscape for our technologies. Their powerlessness to protest the uses of their bodies, wombs, physical materials, or futures leaves them vulnerable to being the test sites for a wide range of possible human applications. As a direct consequence of the serviceability of the bodies of organisms, CRISPR has been integrated into our world with little fanfare, directly facilitating the power problem that will, eventually, impact us too. Given Collingridge’s dilemma, what concepts and strategies could help us reduce the risks from CRISPR?
The first thing we need is a new definition of pollution. When it comes to combustion engines and other technologies of the first industrial revolutions, pollution is by far the most consequential harm. Direct impacts include the release of particulate matter or chemical compounds like nitrogen oxides or carbon dioxide into the atmosphere. Pollution from traffic has an immediate impact, especially fifty to one hundred metres from the roadside, with effects that we can measure, such as reduced growth rates or leaf damage in plants, or changes to soil chemistry and nutrient availability. On the other hand, long term effects of emissions, such as global warming, or the sustained impacts of waste on organisms and ecosystems, have proven tricky to anticipate and even harder to hold in mind…
…What is curious about the Fourth Industrial Revolution is that while several branches of science are arming us with the evidence that justifies an expansion of the moral circle to encompass a larger range of organisms, other branches are cranking up the objectification and exploitation of life-forms. As a result, there’s an obvious gap. Without addressing this, most concepts of pollution will remain anthropocentric. This may prove a critical misstep…
A provocative argument that “Gene Editing is Pollution,” from @TheIdeasLetter. Eminently worth reading in full.
See also: “The Ethics and Security Challenge of Gene Editing” and “The great gene editing debate: can it be safe and ethical?“
* Isaac Asimov
###
As we ponder permuted progeny, we might send microbiological birthday greetings to Jacques Lucien Monod; he was born on this date in 1910. A biochemist, he shared (with François Jacob and André Lwoff) the Nobel Prize in Physiology or Medicine in 1965, “for their discoveries concerning genetic control of enzyme and virus synthesis.”
But Monod, who became the director of the Pasteur Institute, also made significant contributions to the philosophy of science– in particular via his 1971 book (based on a series of his lectures) Chance and Necessity, in which he examined the philosophical implications of modern biology. The importance of Monod’s work as a bridge between the chance and necessity of evolution and biochemistry on the one hand, and the human realm of choice and ethics on the other, can be seen in his influence on philosophers, biologists, and computer scientists including Daniel Dennett, Douglas Hofstadter, Marvin Minsky, and Richard Dawkins… and as a context setter for the deliberations suggested above…