Posts Tagged ‘Ada Lovelace’
“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…
A substantial– and important– look at a troubling current flowing through the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…
… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.
Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity– better still if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain themselves only as long as there are new willing participants, and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.
…
There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the ’40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the spectacular failure of any useful artificial intelligence to materialize, futurists had moved on to artificial life.
Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.
The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.
…
I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to use the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.
…
… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.
Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.
This post won’t convince anyone on the inside of the harms they are experiencing nor the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.
David Nye described 19th- and 20th-century Americans’ perception of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…
Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to Mastodon and Bluesky).
See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).
* Proverbs 17:28
###
As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine— for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

“Machines take me by surprise with great frequency”*…
In search of universals in the 17th century, Gottfried Leibniz imagined the calculus ratiocinator, a theoretical framework for universal logical calculation– one that led Norbert Wiener to suggest that Leibniz should be considered the patron saint of cybernetics. In the 19th century, Charles Babbage and Ada Lovelace took a pair of whacks at making it real.
Ironically, it was confronting the impossibility of a universal calculator that led to modern computing. In 1936 (the same year that Charlie Chaplin released Modern Times), Alan Turing (following on Gödel’s demonstration that mathematics is incomplete and addressing Hilbert’s “decision problem,” querying the limits of computation) published the (notional) design of a “machine” that elegantly demonstrated those limits– and, as Sheon Han explains, birthed computing as we know it…
… [Hilbert’s] question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.
The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation, but also settled a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It’s abstract because it doesn’t (and can’t) physically exist as a tangible device. Instead, it’s a conceptual model of computation: If the machine can calculate a function, then the function is computable.
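To make the abstraction concrete: a Turing machine is nothing but a finite table of rules plus an unbounded tape. Below is a minimal sketch in Python– my own illustration, not anything from Han’s article– of a machine whose three rules compute binary increment.

```python
# A minimal Turing machine simulator, purely illustrative.
# transitions maps (state, symbol) -> (new_state, written_symbol, move),
# where move is -1 (left) or +1 (right); reaching "halt" stops the machine.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = max(cells)               # start on the rightmost symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    out = "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))
    return out.strip(blank)

# Binary increment: sweep left from the last digit, turning 1s into 0s
# until a 0 (or the blank left edge) absorbs the carry.
increment = {
    ("start", "1"): ("start", "0", -1),  # 1 + carry = 0, carry ripples on
    ("start", "0"): ("halt",  "1",  0),  # 0 + carry = 1, done
    ("start", "_"): ("halt",  "1",  0),  # ran off the left edge: new leading 1
}

print(run_turing_machine(increment, "1011"))  # -> "1100"  (11 + 1 = 12)
```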
…
With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process — a set of instructions, which today we’d call an algorithm — that can always determine whether a given statement is true?…
… in 1936, Church and Turing — using different methods — independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway’s Game of Life, are undecidable: No algorithm can determine whether a certain pattern will appear from an initial pattern.
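To make “undecidable” concrete, here is a small sketch (mine, not the article’s): stepping Life forward is purely mechanical, and asking whether a pattern appears within some fixed number of steps is easy to answer– it’s the unbounded “will it ever appear?” that no algorithm can settle for all inputs.

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # A cell lives next generation with exactly 3 live neighbors,
    # or with 2 if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def appears_within(start, target, k):
    """The decidable version: does pattern `target` (a set of cells) show up
    within k steps? The undecidable question simply drops the bound k."""
    state = set(start)
    for _ in range(k):
        if target <= state:
            return True
        state = life_step(state)
    return target <= state
```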
…
Beyond answering these fundamental questions, Turing’s machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of other Turing machines (their rules and input tapes) and simulate their behaviors on its own input tape, producing the same output that the simulated machine would produce, just as today’s computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture — called the von Neumann architecture — that made the universal Turing machine concept possible in a real-life machine…
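A side note of my own, not Han’s: the toy simulator sketched earlier is already a universal machine in miniature. It accepts another machine’s transition table and tape as plain data and reproduces that machine’s behavior– exactly the move that lets a stored-program computer treat programs as just another kind of input.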
As Turing said, “if a machine is expected to be infallible, it cannot also be intelligent.” On the importance of thought experiments: “The Most Important Machine That Was Never Built,” from @sheonhan in @QuantaMagazine.
* Alan Turing
###
As we sum it up, we might spare a thought for Martin Gardner; he died on this date in 2010. Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.
Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf., especially, his work on Lewis Carroll– including the delightful Annotated Alice— and on G.K. Chesterton). And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a regular column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.

“We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves”*…
Lee Wilkins on the interconnected development of digital and textile technology…
I’ve always been fascinated with the co-evolution of computation and textiles. Some of the first industrialized machines produced elaborate textiles on a mass scale, the most famous example of which is the Jacquard loom. It used punch cards to create complex designs programmatically, similar to the computer punch cards that were used until the 1970s. And craft work and computation share many other parallel processes. The process of pulling wires is similar to the way yarn is made, and silkscreening is common in both fabric and printed circuit board production. Another of my favorite examples is rubylith, a light-blocking film used to prepare silkscreens for fabric printing and to imprint designs on integrated circuits.
Of course, textiles and computation have diverged on their evolutionary paths, but I love finding the places where they do converge – or inventing them myself. Recently, I’ve had the opportunity to work with a gigantic Tajima digital embroidery machine [see above]. This room-sized machine, affectionately referred to as The Spider Queen by the technician, loudly sews hundreds of stitches per minute – something that would take me months to make by hand. I’m using it to make large soft speaker coils by laying conductive fibers on a thick woven substrate. I’m trying to recreate functional coils – for use as radios, speakers, inductive power, and motors – in textile form. Given the shared history, I can imagine a parallel universe where embroidery is considered high-tech and computers a crafty hobby…
Notes, in @the_prepared.
* Ada Lovelace, programmer of the Analytical Engine, which was designed (though never completed) by her collaborator Charles Babbage
###
As we investigate intertwining, we might recall that it was on this date in 1922 that Frederick Banting and Charles Best announced the discovery of insulin that they had made the prior year (with James Collip). The co-inventors sold the insulin patent to the University of Toronto for a mere $1. They wanted everyone who needed their medication to be able to afford it.
Today, Banting and his colleagues would be spinning in their graves: their drug, one on which many of the 30 million Americans with diabetes rely, has become the poster child for pharmaceutical price gouging.
The cost of the four most popular types of insulin has tripled over the past decade, and the out-of-pocket prescription costs patients now face have doubled. By 2016, the average price per month had risen to $450— and costs continue to rise, so much so that as many as one in four people with diabetes are now skimping on or skipping lifesaving doses…

Best (left) and Banting with one of the diabetic dogs used in their experiments with insulin
“Plans are worthless, but planning is everything”*…

We’re living through a real-time natural experiment on a global scale. The differential performance of countries, cities and regions in the face of the COVID-19 pandemic is a live test of the effectiveness, capacity and legitimacy of governments, leaders and social contracts.
The progression of the initial outbreak in different countries followed three main patterns. Countries like Singapore and Taiwan represented Pattern A, where (despite many connections to the original source of the outbreak in China) vigilant government action effectively cut off community transmission, keeping total cases and deaths low. China and South Korea represented Pattern B: an initial uncontrolled outbreak followed by draconian government interventions that succeeded in getting at least the first wave of the outbreak under control.
Pattern C is represented by countries like Italy and Iran, where waiting too long to lock down populations led to a short-term exponential growth of new cases that overwhelmed the healthcare system and resulted in a large number of deaths. In the United States, the lack of effective and universally applied social isolation mechanisms, as well as a fragmented healthcare system and a significant delay in rolling out mass virus testing, led to a replication of Pattern C, at least in densely populated places like New York City and Chicago.
Despite the Chinese and Americans blaming each other and crediting their own political system for successful responses, the course of the virus didn’t score easy political points on either side of the new Cold War. Regime type isn’t correlated with outcomes. Authoritarian and democratic countries are included in each of the three patterns of responses: authoritarian China and democratic South Korea had effective responses to a dramatic outbreak; authoritarian Singapore and democratic Taiwan both managed to quarantine and contain the virus; authoritarian Iran and democratic Italy both experienced catastrophe.
It’s generally a mistake to make long-term forecasts in the midst of a hurricane, but some outlines of lasting shifts are emerging. First, a government or society’s capacity for technical competence in executing plans matters more than ideology or structure. The most effective arrangements for dealing with the pandemic have been found in countries that combine a participatory public culture of information sharing with operational experts competently executing decisions. Second, hyper-individualist views of privacy and other forms of risk are likely to be submerged as countries move to restrict personal freedoms and use personal data to manage public and aggregated social risks. Third, countries that are able to successfully take a longer view of planning and risk management will be at a significant advantage…
From Steve Weber and @nils_gilman, an argument for the importance of operational expertise, long-term planning, and the socialization of some risks: “The Long Shadow Of The Future.”
* Dwight D. Eisenhower
###
As we make ourselves ready, we might recall that it was on this date in 1822 that Charles Babbage [see almanac entry here] proposed a Difference Engine in a paper to the Royal Astronomical Society (which he’d helped found two years earlier).
In Babbage’s time, printed mathematical tables were calculated by human computers… in other words, by hand. They were central to navigation, science, and engineering, as well as mathematics– but mistakes occurred, both in transcription and in calculation. Babbage determined to mechanize the process and to reduce– indeed, to eliminate– errors. His Difference Engine was intended as precisely that sort of mechanical calculator (in this instance, to compute values of polynomial functions).
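For the curious, here is the method of finite differences that the Difference Engine mechanized, sketched in Python (my illustration– Babbage did it with geared wheels and decimal digits, not code). Once a polynomial’s starting value and initial differences are set, every further table entry takes nothing but addition.

```python
def tabulate(initial_differences, n):
    """Tabulate n values of a polynomial from its column of differences.

    For p(x) = x**2 + x + 1 at x = 0, 1, 2, ... the column is
    [p(0), first difference, second difference] = [1, 2, 2].
    """
    diffs = list(initial_differences)
    values = []
    for _ in range(n):
        values.append(diffs[0])
        # Each register absorbs the one below it: pure addition, with no
        # multiplication and so no error-prone hand calculation.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

print(tabulate([1, 2, 2], 6))  # -> [1, 3, 7, 13, 21, 31]
```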
In 1833 he began his programmable Analytical Machine (AKA, the Analytical Engine), the forerunner of modern computers, with coding help from Ada Lovelace, who created an algorithm for the Analytical Machine to calculate a sequence of Bernoulli numbers— for which she is remembered as the first computer programmer.
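Her Note G organized that computation for the Engine’s “mill” and “store”; the sketch below is emphatically not her program, just a modern restatement of the same arithmetic via the standard recurrence (with the convention B_1 = -1/2).

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] as exact fractions, using the recurrence
    sum(C(m+1, j) * B_j for j in 0..m) == 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))   # valid since C(m+1, m) == m + 1
    return B

print(bernoulli(6))  # B_2 = 1/6, B_4 = -1/30, B_6 = 1/42; odd B's past B_1 vanish
```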

A portion of the Difference Engine