(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…

A substantial (and important) look at a troubling current aflow in the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…

… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.

Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain only as long as there are new willing participants and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.

There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the ’40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.

Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.

The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.

I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to coin the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.

… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

This post won’t convince anyone on the inside of the harms they are experiencing nor the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.

David Nye described 19th and 20th century American perceptions of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…

Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to Mastodon and Bluesky).

See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).

* Proverbs 17:28

###

As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine— for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

Daguerreotype by Antoine Claudet, c. 1843 (source)

“A proof tells us where to concentrate our doubts”*…

Andrew Granville at work

Number theorist Andrew Granville on what mathematics really is, on why objectivity is never quite within reach, and on the role that AI might play…

… What is a mathematical proof? We tend to think of it as a revelation of some eternal truth, but perhaps it is better understood as something of a social construct.

Andrew Granville, a mathematician at the University of Montreal, has been thinking about that a lot recently. After being contacted by a philosopher about some of his writing, “I got to thinking about how we arrive at our truths,” he said. “And once you start pushing at that door, you find it’s a vast subject.”

Quanta spoke with Granville about the nature of mathematical proof — from how proofs work in practice to popular misconceptions about them, to how proof-writing might evolve in the age of artificial intelligence…

[excerpts from that interview follow…]

How mathematicians go about research isn’t generally portrayed well in popular media. People tend to see mathematics as this pure quest, where we just arrive at great truths by pure thought alone. But mathematics is about guesses — often wrong guesses. It’s an experimental process. We learn in stages…

The culture of mathematics is all about proof. We sit around and think, and 95% of what we do is proof. A lot of the understanding we gain is from struggling with proofs and interpreting the issues that come up when we struggle with them…

The main point of a proof is to persuade the reader of the truth of an assertion. That means verification is key. The best verification system we have in mathematics is that lots of people look at a proof from different perspectives, and it fits well in a context that they know and believe. In some sense, we’re not saying we know it’s true. We’re saying we hope it’s correct, because lots of people have tried it from different perspectives. Proofs are accepted by these community standards.

Then there’s this notion of objectivity — of being sure that what is claimed is right, of feeling like you have an ultimate truth. But how can we know we’re being objective? It’s hard to take yourself out of the context in which you’ve made a statement — to have a perspective outside of the paradigm that has been put in place by society. This is just as true for scientific ideas as it is for anything else…

[Granville runs through a history of the proof, from Aristotle, through Euclid, to Hilbert, then Russell and Whitehead, ending with Gödel…]

To discuss mathematics, you need a language, and a set of rules to follow in that language. In the 1930s, Gödel proved that no matter how you select your language, there are always statements in that language that are true but that can’t be proved from your starting axioms. It’s actually more complicated than that, but still, you have this philosophical dilemma immediately: What is a true statement if you can’t justify it? It’s crazy.

So there’s a big mess. We are limited in what we can do.

Professional mathematicians largely ignore this. We focus on what’s doable. As Peter Sarnak likes to say, “We’re working people.” We get on and try to prove what we can…

[Granville then turns to computers…]

We’ve moved to a different place, where computers can do some wild things. Now people say, oh, we’ve got this computer, it can do things people can’t. But can it? Can it actually do things people can’t? Back in the 1950s, Alan Turing said that a computer is designed to do what humans can do, just faster. Not much has changed.

For decades, mathematicians have been using computers — to make calculations that can help guide their understanding, for instance. What AI can do that’s new is to verify what we believe to be true. Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification.

But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. So I’m not saying that I believe something like Lean is going to make a lot of errors. I’m just not sure it’s any more secure than most things done by humans…
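[For readers curious what feeding a statement to Lean looks like, here is a toy sketch (not from the interview) of the kind of claim Lean will verify mechanically — it accepts a theorem only when every step follows from its axioms and library:]

```lean
-- `rfl` closes goals that hold by pure computation.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A claim about all natural numbers, discharged by a library lemma.
theorem add_comm_example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

As Granville notes, the guarantee is only as good as the translation: Lean certifies the formal statement it was given, not that the statement faithfully captures the informal theorem.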

Perhaps it could assist in creating a proof. Maybe in five years’ time, I’ll be saying to an AI model like ChatGPT, “I’m pretty sure I’ve seen this somewhere. Would you check it out?” And it’ll come back with a similar statement that’s correct.

And then once it gets very, very good at that, perhaps you could go one step further and say, “I don’t know how to do this, but is there anybody who’s done something like this?” Perhaps eventually an AI model could find skilled ways to search the literature to bring tools to bear that have been used elsewhere — in a way that a mathematician might not foresee.

However, I don’t understand how ChatGPT can go beyond a certain level to do proofs in a way that outstrips us. ChatGPT and other machine learning programs are not thinking. They are using word associations based on many examples. So it seems unlikely that they will transcend their training data. But if that were to happen, what will mathematicians do? So much of what we do is proof. If you take proofs away from us, I’m not sure who we become…

Eminently worth reading in full: “Why Mathematical Proof Is a Social Compact,” in @QuantaMagazine.

* Morris Kline

###

As we add it up, we might send carefully calculated birthday greetings to Edward G. Begle; he was born on this date in 1914. A mathematician who was an accomplished topologist, he is best remembered for his role as the director of the School Mathematics Study Group (SMSG), the primary group credited with developing what came to be known as The New Math (a pedagogical response to Sputnik, taught in American grade schools from the late 1950s through the 1970s)… which will be well-known to (if not necessarily fondly recalled by) readers of a certain age.

source

“Our poetry is courage, audacity and revolt”*…

By Josh Millard for the Tabs Discord

One of your correspondent’s daily delights is Rusty Foster’s Today in Tabs, a newsletter that informs and provokes as it, inevitably, amuses. Take for example this excerpt from Monday’s installment, subtitled “Today in Fascism”…

“Could the end of the AI hype cycle be in sight?” asked TechBrew’s Patrick Kulp and precisely on time today here’s a doorstop of LinkedIn-brained crypto-(but-not-too-crypto)-fascism from Egg Andreessen titled “The Techno-Optimist Manifesto.” It’s very long, and you should absolutely not read it, but it’s useful for finally making explicit the fascist philosophy that people like Brad Johnson have long argued is growing steadily less implicit in Silicon Valley’s techno-triumphalism.

“Techno-Optimists believe that societies, like sharks, grow or die,” writes Egg, and Rose Eveleth was already like 🤔:

But before going fully mask-off, Andreessen has some crazy things to say about AI.

There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire.

But AI can surely help us kill the right people in war much more efficiently, yes? Still, he needs to make a pseudo-moral case to keep pumping cash into the AI bubble, so we get this:

We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.

Got that, Untermenschen? Regulation == murder. [Followed by the photo at the top]

But let’s get to the good stuff, in the section titled “Becoming Technological Supermen” (I swear I’m not making this up).

We believe in the romance of technology, of industry. The eros of the train, the car, the electric light, the skyscraper. And the microchip, the neural network, the rocket, the split atom.

We believe in adventure. Undertaking the Hero’s Journey, rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community.

To paraphrase a manifesto of a different time and place: “Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”

The first two paragraphs here are just bonkers. He’s horny for trains? I guess he saw North By Northwest at an impressionable age. But that last paragraph contains the only quote in the whole piece that isn’t attributed to a specific source, and it turns out it’s not really a paraphrase, it’s a direct quote from Filippo Marinetti’s 1909 “Futurist Manifesto” with “technology” substituted for the original’s “poetry.” I wonder if Marinetti wrote any other famous manifestos?

In case we somehow still don’t get it, Andreessen specifies that “The Enemy” is “the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable…” and then drops an extended Nietzsche excerpt. You know who else hated the ivory tower and loved Nietzsche?…

“Industrial Society and Its Future (Are Gonna Be Great!),” from @rusty.todayintabs.com. Do yourself the favor of subscribing to Today in Tabs— it’s marvelous.

* Filippo Tommaso Marinetti, Manifesti Futuristi

###

As we reprioritize prudence, we might recall that it was on this date in 1896 that Richard F. Outcault’s comic strip Hogan’s Alley— featuring “the Yellow Kid” (Mickey Dugan)– debuted in William Randolph Hearst’s New York Journal as a regular feature. While “the Yellow Kid” had appeared irregularly before, many historians suggest it was the first full-color comic to be printed regularly, and it was one of the earliest in the history of the medium; Outcault’s use of word balloons in the Yellow Kid influenced the basic appearance and use of balloons in subsequent newspaper comic strips and comic books. Outcault’s work aimed at humor and social commentary; but (perhaps ironically) the concept of “yellow journalism,” so named after the “Yellow Kid” cartoons, referred to stories sensationalized for the sake of selling papers (as in the publications of Hearst and Joseph Pulitzer, an earlier home to sporadic appearances of the Yellow Kid).

source

“Bureaucracy defends the status quo long past the time when the quo has lost its status”*…

… which is one of the reasons that they’re hard to update. Kevin Baker describes a 1998 visit to the IRS Atlanta Service Center and ponders its lessons…

… the first thing you’d notice would be the wires. They ran everywhere, and the building obviously hadn’t been constructed with them in mind. As you walked down a corridor, passing carts full of paper returns and rows of “tingle tables,” you would tread over those wires on a raised metal gangway. Each work area had an off-ramp, where both the wires and people would disembark…

… The desks were covered with dot matrix paper, cartons of files, and Sperry terminals glowing a dull monochromatic glow. These computers were linked to a mainframe in another room. Magnetic tapes from that mainframe, and from mainframes all over the country, would be airlifted to National Airport in Washington DC. From there, they’d be put on trucks to a West Virginia town of about 14,000 people called Martinsburg. There, they’d be loaded into a machine, the first version of which was known colloquially—and not entirely affectionately—as the “Martinsburg Monster.” This computer amounted to something like a national nerve center for the IRS. On it programs called the Individual Master File and the Business Master File processed the country’s tax records. These programs also organized much of the work. If there were a problem at Martinsburg, work across the IRS’s offices spanning the continent could and frequently did shut down.

Despite decades of attempts to kill it, the IRS’s Individual Master File, an almost sixty-year-old accumulation of government assembly language, lives on. Part of this strange persistence can be pegged squarely on Congress’s well-documented history of starving the IRS for funding. But another part of it is that the Individual Master File has become so completely entangled in the life of the agency that modernizing it resembles delicate surgery more than a straightforward software upgrade. Job descriptions, work processes, collective bargaining agreements, administrative law, and technical infrastructure all coalesce together and interface with it, so that a seemingly technical task requires considerable sociological, historical, legal, and political knowledge.

In 2023, as it was in the 1980s, the IRS is a cyborg bureaucracy, an entangled mass of law, hardware, software, and clerical labor. It was among the first government agencies to embrace automatic data processing and large-scale digital computing. And it used these technologies to organize work, to make decisions, and to understand itself. In important ways, the lines between the digital shadow of the agency—its artificial bureaucracy—and its physical presence became difficult if not impossible to disentangle….

Baker is launching a new Substack, devoted to exploring precisely this kind of tangle– and what it might portend…

This series, called Artificial Bureaucracy, is a long-term project looking at the history of government computing in the fifty-year period between 1945 and 1995. I think this is a timely subject. In the past several years, promoters and critics of artificial intelligence alike have talked up the possibility that decision-making and even governance itself may soon be handed over to sophisticated AI systems. What draws together both the dreams of boosters and the nightmares of critics is a deterministic orientation towards the future of technology, a conception of technology as autonomous and somehow beyond the possibility of control.

These visions mostly ignore the fact that the computerization of governance is a project at least seventy years in the making, and that project has never been determined, in the first instance or the last, primarily by “technological” factors. Like everything in government, the hardware and software systems that make up its artificial bureaucracy were and are subject to negotiation, conflict, administrative inertia, and the individual agency of its users.

Looking at government computing can also tell us something about AI. The historian of computing Michael Mahoney has argued that studying the history of software is the process of learning how groups of people came to put their worlds in a machine. If this is right—and I think it is—our conceptions of “artificial intelligence” have an unwarranted individualistic bias; the proper way to understand machine intelligence isn’t by analogy to individual human knowledge and decision-making, but to methods of bureaucratic knowledge and action. If it is about anything, the story of AI is the story of bureaucracy. And if the future of governance is AI, then it makes sense to know something about its past…

Is bureaucracy the future of AI? Check out the first post in Artificial Bureaucracy, from @kevinbaker@mastodon.social.

* Laurence J. Peter

###

As we size up systems, we might recall that it was on this date in 1935 that President Franklin D. Roosevelt signed the Social Security Act. A key component of Roosevelt’s New Deal domestic program, the Act created both the Social Security program and insurance against unemployment.

Roosevelt signs Social Security Bill (source)

“These are the forgeries of jealousy”*…

Analysis of Leonardo da Vinci’s Salvator Mundi required dividing a high-resolution image of the complete painting into a set of overlapping square tiles. But only those tiles that contained sufficient visual information, such as the ones outlined here, were input to the author’s neural-network classifier.

Is it authentic? Attorney and AI practitioner Steven J. Frank, working with his wife, art historian and curator Andrea Frank (together, Art Eye-D Associates), brings machine learning to bear…

The sound must have been deafening—all those champagne corks popping at Christie’s, the British auction house, on 15 November 2017. A portrait of Jesus, known as Salvator Mundi (Latin for “savior of the world”), had just sold at Christie’s in New York for US $450.3 million, making it by far the most expensive painting ever to change hands.

But even as the gavel fell, a persistent chorus of doubters voiced skepticism. Was it really painted by Leonardo da Vinci, the towering Renaissance master, as a panel of experts had determined six years earlier? A little over 50 years before that, a Louisiana man had purchased the painting in London for a mere £45. And prior to the rediscovery of Salvator Mundi, no Leonardo painting had been uncovered since 1909.

Some of the doubting experts questioned the work’s provenance—the historical record of sales and transfers—and noted that the heavily damaged painting had undergone extensive restoration. Others saw the hand of one of Leonardo’s many protégés rather than the work of the master himself.

Is it possible to establish the authenticity of a work of art amid conflicting expert opinions and incomplete evidence? Scientific measurements can establish a painting’s age and reveal subsurface detail, but they can’t directly identify its creator. That requires subtle judgments of style and technique, which, it might seem, only art experts could provide. In fact, this task is well suited to computer analysis, particularly by neural networks—computer algorithms that excel at examining patterns. Convolutional neural networks (CNNs), designed to analyze images, have been used to good advantage in a wide range of applications, including recognizing faces and helping to pilot self-driving cars. Why not also use them to authenticate art?

That’s what I asked my wife, Andrea M. Frank, a professional curator of art images, in 2018. Although I have spent most of my career working as an intellectual-property attorney, my addiction to online education had recently culminated in a graduate certificate in artificial intelligence from Columbia University. Andrea was contemplating retirement. So together we took on this new challenge…
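The image caption above describes the key preprocessing step: dividing a high-resolution image of the painting into overlapping square tiles and keeping only those with sufficient visual information. The article doesn’t publish the Franks’ code; the sketch below is a hypothetical, minimal version of that step, using pixel standard deviation as a stand-in for “sufficient visual information” (the tile size, overlap, and threshold here are illustrative assumptions, not their actual parameters):

```python
import numpy as np

def tile_image(img, tile=256, overlap=128, min_std=10.0):
    """Split a grayscale image into overlapping square tiles,
    keeping only tiles with enough visual information
    (here: pixel standard deviation above a threshold)."""
    step = tile - overlap
    kept = []
    h, w = img.shape
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            patch = img[y:y + tile, x:x + tile]
            if patch.std() >= min_std:  # skip flat, featureless regions
                kept.append((y, x, patch))
    return kept

# Synthetic example: a flat "canvas" with one textured quadrant.
rng = np.random.default_rng(0)
img = np.full((512, 512), 128.0)
img[:256, :256] += rng.normal(0, 30, (256, 256))  # detailed region
tiles = tile_image(img)  # only tiles overlapping the textured quadrant survive
```

The surviving tiles would then be fed individually to a CNN classifier trained on securely attributed works, with the per-tile verdicts aggregated into a judgment about the whole painting.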

With millions at stake, deep learning enters the art world. The fascinating story: “This AI Can Spot an Art Forgery,” from @ArtAEye in @IEEESpectrum.

* Shakespeare (Titania, A Midsummer Night’s Dream, Act II, Scene 1)

###

As we honor authenticity, we might spare a thought for a champion of authenticity in a different sense, Joris Hoefnagel; he died on this date in 1601. A Flemish painter, printmaker, miniaturist, draftsman, and merchant, he is noted for his illustrations of natural history subjects, topographical views, illuminations (he was one of the last manuscript illuminators), and mythological works.

Hoefnagel made a major contribution to the development of topographical drawing. But perhaps more impactfully, his manuscript illuminations and ornamental designs played an important role in the emergence of floral still-life painting as an independent genre in northern Europe at the end of the 16th century. The almost scientific naturalism of his botanical and animal drawings served as a model for a later generation of Netherlandish artists. Through these nature studies he also contributed to the development of natural history, and was thus a founder of proto-scientific inquiry.

Portrait of Joris Hoefnagel, engraving by Jan Sadeler, 1592 (source)