Posts Tagged ‘Charles Babbage’
“I’ve been discovering, much to my dismay, that I’m not a criminal mastermind or anything. I’m just brute force and my powers in no way include super-intelligence, which kind of pisses me off.”*…
How do we accommodate ourselves to the prospect of an intelligence far greater than our own? In a consideration of J.D. Beresford’s The Hampdenshire Wonder (the first recognized appearance of the concept in modern English-language literature), Ted Chiang unspools the intellectual and cultural history of this now-prevalent trope…
J.D. Beresford’s The Hampdenshire Wonder is generally considered to be the first fictional treatment of superhuman intelligence, or “superintelligence.” This is a familiar trope for readers of science fiction today, but when the novel was originally published in 1911 it was anything but. What intellectual soil needed to be tilled before this idea could sprout?
At least since Plato, Western thought has clung to the idea of a Great Chain of Being, also known as the scala naturae, a system of classification in which plants rank below animals; humans rank above animals but below angels; and angels rank above humans but below God. There was no implied movement to this hierarchy; no one expected that plants would turn into animals given enough time, or that humans would turn into angels.
But by the 1800s, naturalists like Lamarck were questioning the assumption that species were immutable; they suggested that over time organisms actually grew more complex, with the human species as the pinnacle of the process. Darwin brought these speculations into public consciousness in 1859 with On the Origin of Species, and while he emphasized that evolution branches in many directions without any predetermined goal in mind, most people came to think of evolution as a linear progression.
Only then, I think, was it possible to conceive of humanity as a point on a line that could keep extending, to imagine something that would be more than human without being supernatural.
Darwin’s half-cousin, Francis Galton, was the first to suggest the idea that mental attributes like intelligence could be quantified. Galton published a volume called Hereditary Genius in 1869, and during the 1880s and ’90s he measured people’s reaction times as a way of gauging their mental ability, pioneering what we now call the field of psychometrics. By 1905, Alfred Binet had introduced a questionnaire to measure children’s intelligence; such questionnaires would evolve into IQ tests. The validity of psychometrics is quite controversial nowadays, as people disagree about what “intelligence” means and to what extent it can be measured. Some modern cognitive scientists do not consider the term intelligence particularly useful, instead preferring to use more specific terms like executive function, attentional control, or theory of mind. In the future “intelligence” may be regarded as a historical curiosity, like phlogiston, but until we develop a more precise vocabulary, we continue to use the term. Our contemporary notion of intelligence first gained currency around the time that Beresford was writing, and one can see how that converged with the idea of the superhuman in The Hampdenshire Wonder.
The titular character of The Hampdenshire Wonder is a boy named Victor Stott…
… Victor is born with an enormous head but an ordinary body, which disappoints his athletic father but also points to certain assumptions we have about the relationship between the mental and the physical. Beresford could have made Victor both an athlete and a genius, but he opted instead to follow a trope perhaps originated by Wells: the idea that evolution is pushing humanity toward a giant-brained phenotype, which is itself implicitly premised on the idea that mental ability and physical ability are in opposition to one another. This has remained a common trope in science fiction, although there are occasional depictions of mental and physical ability going hand in hand…
[Chiang traces the development of the “superintelligence,” the problems it raises, and the ways that they are treated in The Hampdenshire Wonder and elsewhere– “whatever your wisdom, you have to live in a world of comparative ignorance, a world which cannot appreciate you, but which can and will fall back upon the compelling power of the savage—the resort to physical, brute force.”…]
… In 1993 [Vernor] Vinge [here] argued that progress in computer technology would inevitably lead to a machine form of superintelligence. He proposed the term “the singularity” to describe the date—in the next few decades—beyond which events would be impossible to imagine. Since then, the technological singularity has largely replaced biological superintelligence as a trope in science fiction. More than that, it has become a trope in the Silicon Valley tech industry, giving rise to a discourse that is positively eschatological in tone. Superintelligence lies on the other side of a conceptual event horizon. When considered as a purely fictional idea, it imposes a limit on the kind of narratives one can tell about it. But when you start imagining it as something that could exist in reality, it becomes an end to human narratives altogether.
The Hampdenshire Wonder does posit a kind of eschatological scenario, but of a completely different order. After Victor’s downfall, Challis recounts the conclusion he came to after a conversation he’d had with the child, revealing a profound terror about the finiteness of knowledge:
Don’t you see that ignorance is the means of our intellectual pleasure? It is the solving of the problem that brings enjoyment—the solved problem has no further interest. So when all is known, the stimulus for action ceases; when all is known there is quiescence, nothingness. Perfect knowledge implies the peace of death…
… The idea that the search for understanding will inevitably lead to a kind of cognitive heat death is an interesting one. I don’t believe it and I doubt any scientist believes it, so it’s curious that Beresford—clearly an admirer of scientists—apparently did. Challis talks about the need for mysteries that elude explanation, which is a surprisingly anti-intellectual stance to find in a novel about superintelligence. While there is arguably a strain of anti-intellectualism in stories where superintelligent characters bring about their own downfall, those can just as easily be understood as warnings about hubris, a literary device employed as far back as the first recorded literature, “The Epic of Gilgamesh.” But The Hampdenshire Wonder, in its final pages, is making an altogether different claim: The pursuit of knowledge itself is ultimately self-defeating.
Nowadays we associate the word “prodigy” with precocious children, but in centuries past the word was used to describe anything monstrous. Victor Stott clearly qualifies as a prodigy in the modern sense, but he qualifies in the older sense too: Not only does he frighten the ignorant and superstitious, he induces a profound terror in the educated and intellectual. Seen in this light, the first novel about superintelligence is actually a work of horror SF, a cautionary tale about the dangers of knowing too much…
Superintelligence and its discontents, from @ted-chiang.bsky.social in @literaryhub.bsky.social.
Another powerful (and not unrelated) piece from Chiang: “Will A.I. Become the New McKinsey?”
* Kelly Thompson, The Girl Who Would Be King
###
As we wrestle with reason, we might wish a Joyeux Anniversaire to silk weaver Joseph Marie Jacquard; he was born on this date in 1752. Jacquard’s 1805 invention of the programmable power loom, controlled by a series of punched “instruction” cards and capable of weaving essentially any pattern, ignited a technological revolution in the textile industry… indeed, it set off a chain of revolutions: it inspired Charles Babbage in the design of his Analytical Engine (the ur-computer), and later, Herman Hollerith, who used punched cards in the “tabulator” that he created for the 1890 Census… and in so doing, pioneered the use of those cards for computer input… which is to say that Jacquard helped create the preconditions for AI (among all of the other things that computers can do).

“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…
A substantial– and important– look at a troubling current aflow in the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…
… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.
Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain only as long as there are new willing participants and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.
…
There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the 40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.
Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher in a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.
The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.
…
I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to coin the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.
…
… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.
Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.
This post won’t convince anyone on the inside of the harms they are experiencing nor the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.
David Nye described 19th- and 20th-century American perceptions of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…
Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to mastodon and BlueSky).
See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).
* Proverbs 17:28
###
As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage‘s proposed mechanical general-purpose computer, the Analytical Engine— for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

“No problem can be solved from the same level of consciousness that created it”*…
… perhaps especially not the problem of consciousness itself. At least for now…
A 25-year science wager has come to an end. In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest — and declared Chalmers the winner.
What ultimately helped to settle the bet was a key study testing two leading hypotheses about the neural basis of consciousness, whose findings were unveiled at the conference.
“It was always a relatively good bet for me and a bold bet for Christof,” says Chalmers, who is now co-director of the Center for Mind, Brain and Consciousness at New York University. But he also says this isn’t the end of the story, and that an answer will come eventually: “There’s been a lot of progress in the field.”
Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.
Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it, however. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”…
Neuroscientist Christof Koch wagered philosopher David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues: “Decades-long bet on consciousness ends — and it’s philosopher 1, neuroscientist 0,” from @Nature. Eminently worth reading in full for background and state-of-play.
* Albert Einstein
###
As we ponder pondering, we might spare a thought for Vannevar Bush; he died on this date in 1974. An engineer, inventor, and science administrator, he headed the World War II U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.
Bush also did his own work. Before the war, in 1925, at age 35, he developed the differential analyzer, an early analog computer capable of solving differential equations. It put into productive form the mechanical concepts left incomplete by Charles Babbage a half-century earlier, along with theoretical work by Lord Kelvin. The machine filled a 20-by-30-foot room. He also seeded ideas (most famously in his 1945 essay “As We May Think”) that were later adopted as internet hypertext links.
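Bush’s machine solved such equations mechanically, with rotating shafts and wheel-and-disc integrators standing in for the calculus. The digital descendant of that idea is numerical integration; here is a minimal, purely illustrative Python sketch (the equation, step size, and names are arbitrary choices for demonstration, not anything Bush’s analyzer actually ran):

    # Integrate y'' = -y (simple harmonic motion), the kind of equation the
    # differential analyzer solved with chained mechanical integrators.
    def integrate(y0, v0, dt=0.001, t_end=10.0):
        y, v, t = y0, v0, 0.0
        while t < t_end:
            y += v * dt   # each update plays the role of one integrator stage
            v += -y * dt
            t += dt
        return y

    print(integrate(y0=1.0, v0=0.0))  # should land near cos(10) ≈ -0.839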
“We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves”*…
Lee Wilkins on the interconnected development of digital and textile technology…
I’ve always been fascinated with the co-evolution of computation and textiles. Some of the first industrialized machines produced elaborate textiles on a mass scale, the most famous example of which is the jacquard loom. It used punch cards to create complex designs programmatically, similar to the computer punch cards that were used until the 1970s. But craft work and computation have many parallel processes. The process of pulling wires is similar to the way yarn is made, and silkscreening is common in both fabric and printed circuit board production. Another of my favorite examples is rubylith, a light-blocking film used to prepare silkscreens for fabric printing and to imprint designs on integrated circuits.
Of course, textiles and computation have diverged on their evolutionary paths, but I love finding the places where they do converge – or inventing them myself. Recently, I’ve had the opportunity to work with a gigantic Tajima digital embroidery machine [see above]. This room-sized machine, affectionately referred to as The Spider Queen by the technician, loudly sews hundreds of stitches per minute – something that would take me months to make by hand. I’m using it to make large soft speaker coils by laying conductive fibers on a thick woven substrate. I’m trying to recreate functional coils – for use as radios, speakers, inductive power, and motors – in textile form. Given the shared history, I can imagine a parallel universe where embroidery is considered high-tech and computers a crafty hobby…
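The parallel Wilkins draws between punch cards and programs is easy to make concrete. A toy Python sketch (entirely illustrative: real Jacquard cards control individual warp hooks, and the pattern below is invented for the example):

    # Toy model of a Jacquard-style punch card chain: each "card" is a row of
    # holes, and each hole decides whether a warp thread is lifted for one
    # pass of the weft.
    cards = [
        "X..X..X.",   # 'X' marks a punched hole, i.e. a lifted warp thread
        ".X..X..X",
        "..X..X..",
        ".X..X..X",
    ]

    def weave(cards):
        for card in cards:
            # Lifted threads show the warp ('#'); the rest show the weft ('.').
            print("".join("#" if hole == "X" else "." for hole in card))

    weave(cards)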
Notes, in @the_prepared.
* Ada Lovelace, programmer of the Analytical Engine, which was designed (though never completed) by her collaborator Charles Babbage
###
As we investigate intertwining, we might recall that it was on this date in 1922 that Frederick Banting and Charles Best announced their discovery of insulin, made the prior year with James Collip. The co-discoverers sold the insulin patent to the University of Toronto for a mere $1. They wanted everyone who needed their medication to be able to afford it.
Today, Banting and his colleagues would be spinning in their graves: their drug, one on which many of the 30 million Americans with diabetes rely, has become the poster child for pharmaceutical price gouging.
The cost of the four most popular types of insulin has tripled over the past decade, and the out-of-pocket prescription costs patients now face have doubled. By 2016, the average price per month rose to $450 — and costs continue to rise, so much so that as many as one in four people with diabetes are now skimping on or skipping lifesaving doses…

Best (left) and Banting with one of the diabetic dogs used in their experiments with insulin
“The future is already here – it’s just not evenly distributed”*…

Security, transportation, energy, personal “stuff”– the 2018 staff of Popular Mechanics asked leading engineers and futurists for their visions of future cities, and built a handbook to navigate this new world: “The World of 2045.”
* William Gibson (in The Economist, December 4, 2003)
###
As we take the long view, we might spare a thought for Charles Babbage; he died on this date in 1871. A mathematician, philosopher, inventor, and mechanical engineer, Babbage is best remembered for originating the concept of a programmable computer. Anxious to eliminate inaccuracies in mathematical tables, he first built a small calculating machine able to compute squares. He then produced prototypes of portions of a larger Difference Engine. (Georg and Edvard Scheutz later constructed the first working devices to the same design, and found them successful in limited applications.) In 1833 he began his programmable Analytical Engine, the forerunner of modern computers, with coding help from Ada Lovelace, who created an algorithm for it to calculate a sequence of Bernoulli numbers— for which she is remembered as the first computer programmer.
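Lovelace’s famous Note G laid out, operation by operation, how the Engine could compute the Bernoulli numbers. The sketch below is not her program; it is just a minimal modern Python rendering of the same computation, using the standard recurrence for the Bernoulli numbers with exact rational arithmetic:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return B_0 .. B_n, using sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            acc = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-acc / (m + 1))
        return B

    print(bernoulli(8))  # B_1 = -1/2, B_2 = 1/6; odd-indexed values beyond B_1 are 0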
Babbage’s other inventions include the cowcatcher, the dynamometer, the standard railroad gauge, uniform postal rates, occulting lights for lighthouses, Greenwich time signals, the heliograph, and the ophthalmoscope. A true hacker, he was also passionate about cyphers and lock-picking.




