(Roughly) Daily

Posts Tagged ‘AI’

“What we need is the celestial fire to change the flint into the transparent crystal, bright and clear”*…

… or so it used to be. Scientists at Google DeepMind and the Lawrence Berkeley National Laboratory have applied AI to the task– with encouraging results…

Modern technologies from computer chips and batteries to solar panels rely on inorganic crystals. To enable new technologies, crystals must be stable– otherwise they can decompose– and behind each new, stable crystal can be months of painstaking experimentation.

… in a paper published in Nature, we share the discovery of 2.2 million new crystals – equivalent to nearly 800 years’ worth of knowledge. We introduce Graph Networks for Materials Exploration (GNoME), our new deep learning tool that dramatically increases the speed and efficiency of discovery by predicting the stability of new materials.

With GNoME, we’ve multiplied the number of technologically viable materials known to humanity. Of its 2.2 million predictions, 380,000 are the most stable, making them promising candidates for experimental synthesis. Among these candidates are materials with the potential to enable transformative future technologies, from superconductors and materials for powering supercomputers to next-generation batteries that could boost the efficiency of electric vehicles.
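(A note for the technically curious: in computational materials science, “stable” standardly means that a crystal’s computed formation energy lies on– or below– the convex hull spanned by all competing phases of the same elements; a model like GNoME predicts those energies so the hull test can be run across millions of candidates. Below is a minimal sketch of that screening step– the formulas, energies, and function names are hypothetical illustrations, not GNoME’s actual code.)

```python
# Minimal sketch of hull-based stability screening. Illustrative only: the
# formulas, energies, and tolerance below are hypothetical, not GNoME's data.

def energy_above_hull(e_form: float, e_hull: float) -> float:
    """Distance (eV/atom) of a candidate above the convex hull of competing
    phases; values <= 0 indicate thermodynamic stability."""
    return e_form - e_hull

def screen(predicted: dict[str, float],
           hull: dict[str, float],
           tol: float = 0.0) -> list[str]:
    """Keep candidates whose predicted formation energy is within `tol`
    eV/atom of the convex hull (tol > 0 admits near-stable phases)."""
    return [formula for formula, e_form in predicted.items()
            if energy_above_hull(e_form, hull[formula]) <= tol]

# Hypothetical model outputs and hull energies, purely for illustration:
predicted = {"Li7La3Zr2O12": -2.10, "NaCl3": 0.45}
hull      = {"Li7La3Zr2O12": -2.08, "NaCl3": 0.00}
print(screen(predicted, hull))  # -> ['Li7La3Zr2O12']
```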

GNoME shows the potential of using AI to discover and develop new materials at scale. External researchers in labs around the world have independently created 736 of these new structures experimentally in concurrent work. In partnership with Google DeepMind, a team of researchers at the Lawrence Berkeley National Laboratory has also published a second paper in Nature that shows how our AI predictions can be leveraged for autonomous material synthesis.

We’ve made GNoME’s predictions available to the research community. We will be contributing 380,000 materials that we predict to be stable to the Materials Project, which is now processing the compounds and adding them into its online database. We hope these resources will drive forward research into inorganic crystals, and unlock the promise of machine learning tools as guides for experimentation…

GNoME suggests that materials science may be the next frontier to be turbocharged by artificial intelligence (see this earlier example from biotech): “Millions of new materials discovered with deep learning.”

* Henry Wadsworth Longfellow

###

As we drive discovery, we might recall that it was on this date in 1942 that a team of scientists led by Enrico Fermi, working inside an enormous tent on a squash court under the stands of the University of Chicago’s Stagg Field, achieved the first controlled nuclear fission chain reaction… laying the foundation for the atomic bomb and, later, nuclear power generation– that’s to say, inaugurating the Atomic Age.

“…the Italian Navigator has just landed in the New World…”
– Coded telephone message confirming first self-sustaining nuclear chain reaction, December 2, 1942.

Illustration depicting the scene on Dec. 2, 1942 (Photo copyright of Chicago Historical Society) source

Indeed, exactly 15 years later, on this date in 1957, the world’s first full-scale atomic electric power plant devoted exclusively to peacetime uses, the Shippingport Atomic Power Station, reached criticality; the first power was produced 16 days later, after engineers integrated the generator into the distribution grid of Duquesne Light Company.

 source

“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…

A substantial– and important– look at a troubling current aflow in the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…

… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.

Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain only as long as there are new willing participants and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.

There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the ’40s, mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.

Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher in a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.

The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.

I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to coin the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.

… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult, and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

This post won’t convince anyone on the inside of the harms they are experiencing or the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.

David Nye described 19th- and 20th-century Americans’ perception of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…

Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to Mastodon and Bluesky).

See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).

* Proverbs 17:28

###

As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine– for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

Daguerreotype by Antoine Claudet, c. 1843 (source)

“A proof tells us where to concentrate our doubts”*…

Andrew Granville at work

Number theorist Andrew Granville on what mathematics really is, on why objectivity is never quite within reach, and on the role that AI might play…

… What is a mathematical proof? We tend to think of it as a revelation of some eternal truth, but perhaps it is better understood as something of a social construct.

Andrew Granville, a mathematician at the University of Montreal, has been thinking about that a lot recently. After being contacted by a philosopher about some of his writing, “I got to thinking about how we arrive at our truths,” he said. “And once you start pushing at that door, you find it’s a vast subject.”

“How mathematicians go about research isn’t generally portrayed well in popular media. People tend to see mathematics as this pure quest, where we just arrive at great truths by pure thought alone. But mathematics is about guesses — often wrong guesses. It’s an experimental process. We learn in stages…

Quanta spoke with Granville about the nature of mathematical proof — from how proofs work in practice to popular misconceptions about them, to how proof-writing might evolve in the age of artificial intelligence…

[excerpts from that interview follow…]


The culture of mathematics is all about proof. We sit around and think, and 95% of what we do is proof. A lot of the understanding we gain is from struggling with proofs and interpreting the issues that come up when we struggle with them…

The main point of a proof is to persuade the reader of the truth of an assertion. That means verification is key. The best verification system we have in mathematics is that lots of people look at a proof from different perspectives, and it fits well in a context that they know and believe. In some sense, we’re not saying we know it’s true. We’re saying we hope it’s correct, because lots of people have tried it from different perspectives. Proofs are accepted by these community standards.

Then there’s this notion of objectivity — of being sure that what is claimed is right, of feeling like you have an ultimate truth. But how can we know we’re being objective? It’s hard to take yourself out of the context in which you’ve made a statement — to have a perspective outside of the paradigm that has been put in place by society. This is just as true for scientific ideas as it is for anything else…

[Granville runs through a history of the proof, from Aristotle, through Euclid, to Hilbert, then Russell and Whitehead, ending with Gödel…]

To discuss mathematics, you need a language, and a set of rules to follow in that language. In the 1930s, Gödel proved that no matter how you select your language, there are always statements in that language that are true but that can’t be proved from your starting axioms. It’s actually more complicated than that, but still, you have this philosophical dilemma immediately: What is a true statement if you can’t justify it? It’s crazy.

So there’s a big mess. We are limited in what we can do.

Professional mathematicians largely ignore this. We focus on what’s doable. As Peter Sarnak likes to say, “We’re working people.” We get on and try to prove what we can…
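[For reference, the result Granville is paraphrasing– Gödel’s first incompleteness theorem, in its usual modern, Rosser-sharpened form– can be stated:]

```latex
\textbf{Theorem.} If $T$ is a consistent, effectively axiomatized theory
extending basic arithmetic, then there is a sentence $G_T$ in the language
of $T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T .
\]
Moreover, if $T$ is sound (proves only truths about the natural numbers),
then $G_T$ is true yet unprovable in $T$.
```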

[Granville then turns to computers…]

We’ve moved to a different place, where computers can do some wild things. Now people say, oh, we’ve got this computer, it can do things people can’t. But can it? Can it actually do things people can’t? Back in the 1950s, Alan Turing said that a computer is designed to do what humans can do, just faster. Not much has changed.

For decades, mathematicians have been using computers — to make calculations that can help guide their understanding, for instance. What AI can do that’s new is to verify what we believe to be true. Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification.

But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. So I’m not saying that I believe something like Lean is going to make a lot of errors. I’m just not sure it’s any more secure than most things done by humans…
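[To make the Lean workflow concrete, here is a toy machine-checked proof in Lean 4 that addition of natural numbers is commutative– once as a one-line appeal to an already-verified library lemma, and once decomposed into explicit inductive steps, the sort of breaking-down-into-simpler-steps Granville describes. The theorem names are ours, and exact library lemma names can vary across Lean versions.]

```lean
-- Toy machine-checked proofs in Lean 4 (theorem names ours; library lemma
-- names may vary by version).

-- One step: appeal to a lemma the library has already verified.
theorem addCommQuick (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Decomposed into simpler steps: induction on `a`, each case justified
-- explicitly; the kernel accepts nothing it cannot check.
theorem addCommSteps (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => simp [Nat.succ_add, ih]
```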

Perhaps it could assist in creating a proof. Maybe in five years’ time, I’ll be saying to an AI model like ChatGPT, “I’m pretty sure I’ve seen this somewhere. Would you check it out?” And it’ll come back with a similar statement that’s correct.

And then once it gets very, very good at that, perhaps you could go one step further and say, “I don’t know how to do this, but is there anybody who’s done something like this?” Perhaps eventually an AI model could find skilled ways to search the literature to bring tools to bear that have been used elsewhere — in a way that a mathematician might not foresee.

However, I don’t understand how ChatGPT can go beyond a certain level to do proofs in a way that outstrips us. ChatGPT and other machine learning programs are not thinking. They are using word associations based on many examples. So it seems unlikely that they will transcend their training data. But if that were to happen, what would mathematicians do? So much of what we do is proof. If you take proofs away from us, I’m not sure who we become…

Eminently worth reading in full: “Why Mathematical Proof Is a Social Compact,” in @QuantaMagazine.

* Morris Kline

###

As we add it up, we might send carefully calculated birthday greetings to Edward G. Begle; he was born on this date in 1914. A mathematician who was an accomplished topologist, he is best remembered for his role as the director of the School Mathematics Study Group (SMSG), the group primarily credited with developing what came to be known as The New Math (a pedagogical response to Sputnik, taught in American grade schools from the late 1950s through the 1970s)… which will be well-known to (if not necessarily fondly recalled by) readers of a certain age.

source

“Humanity is acquiring all the right technology for all the wrong reasons”*…

Further to yesterday’s post on the poverty created by manufacturing displacement, and in the wake of the Sturm und Drang occasioned by the coup at OpenAI, the estimable Rana Foroohar on the politics of AI…

… Consider that current politics in the developed world — from the rise of Donald Trump to the growth of far right and far left politics in Europe — stem in large part from disruptions to the industrial workforce due to technology and globalisation. The hollowing out of manufacturing work led to more populist and fractious politics, as countries tried (and often failed) to balance the needs of the global marketplace with those of voters.

Now consider that this past summer, the OECD warned that white-collar, skilled labour representing about a third of the workforce in the US and other rich countries is most at risk from disruption by AI. We are already seeing this happen in office work — with women and Asians particularly at risk, since they hold a disproportionate share of the roles in question. As our colleague John Burn-Murdoch has charted [image above], online freelancers are especially vulnerable.

So, what happens when you add more than three times as many workers, in new subgroups, to the cauldron of angry white men who have seen their jobs automated or outsourced in recent decades? Nothing good. I’m always struck when CEOs like Elon Musk proclaim that we are headed towards a world without work as if this is a good thing. As academics like Angus Deaton and Anne Case have laid out for some time now, a world without work very often leads to “deaths of despair,” broken families, and all sorts of social and political ills.

Now, to be fair, Goldman Sachs has estimated that the productivity impact of AI could double the recent rate — mirroring the impact of the PC revolution. This would lead to major growth which could, if widely shared, do everything from cut child poverty to reduce our burgeoning deficit.

But that’s only if it’s shared. And the historical trend lines for technology aren’t good in that sense — technology often widens wealth disparities before labour movements and government regulation equalise things. (Think about the turn of the 20th century, up until the 1930s). But the depth and breadth of AI disruption may well cause unprecedented levels of global labour displacement and political unrest.

I am getting more and more worried that this is where we may be heading. Consider this new National Bureau of Economic Research working paper, which analyses why AI will be as transformative as the industrial revolution. It also predicts, however, that there is a very good chance that it lowers the labour share radically, even pushing it to zero, absent policies that prevent this (the wonderful Daron Acemoglu and Simon Johnson make similar points, and lay out the history of such tech transformations in their book Power and Progress)…

We can’t educate ourselves out of this problem fast enough (or perhaps at all). We also can’t count on universal basic income to fix everything, no matter how generous it could be, because people simply need work to function (as Freud said, it’s all about work and love). Economists and political scientists have been pondering the existential risks of AI — from nuclear war to a pandemic — for years. But I wonder if the real existential crisis isn’t a massive crisis of meaning, and the resulting politics of despair, as work is displaced faster than we can fix the problem…

Everyone’s worried about AI, but are we worried about the right thing? “The politics of AI,” from @RanaForoohar in @FT.

See also: Henry Farrell’s “What OpenAI shares with Scientology” (“strange beliefs, fights over money, and bad science fiction”) and Dave Karpf’s “On OpenAI: Let Them Fight.” (“It’s chaos… And that’s a good thing.”)

For a different point-of-view, see: “OpenAI and the Biggest Threat in the History of Humanity,” from Tomás Pueyo.

And for deep background, read Benjamin Labatut’s remarkable The MANIAC.

* R. Buckminster Fuller

###

As we equilibrate, we might recall that it was on this date in 1874 that electrical engineer, inventor, and physicist Ferdinand Braun published a paper in the Annalen der Physik und Chemie describing his discovery of the electrical rectifier effect– the basis of the first practical semiconductor devices.

(Braun is better known for his contributions to the development of radio and television technology: he shared the 1909 Nobel Prize in Physics with Guglielmo Marconi “for their contributions to the development of wireless telegraphy” [Braun invented the crystal tuner and the phased-array antenna]; was a founder of Telefunken, one of the pioneering communications and television companies; and, as the builder of the first cathode ray tube, has been called the “father of television,” an honor shared with inventors like Paul Gottlieb Nipkow.)

source

“Our poetry is courage, audacity and revolt”*…

By Josh Millard for the Tabs Discord

One of your correspondent’s daily delights is Rusty Foster’s Today in Tabs, a newsletter that informs and provokes as it, inevitably, amuses. Take for example this excerpt from Monday’s installment, subtitled “Today in Fascism”…

“Could the end of the AI hype cycle be in sight?” asked TechBrew’s Patrick Kulp, and precisely on time today here’s a doorstop of LinkedIn-brained crypto-(but-not-too-crypto)-fascism from Egg Andreessen titled “The Techno-Optimist Manifesto.” It’s very long, and you should absolutely not read it, but it’s useful for finally making explicit the fascist philosophy that people like Brad Johnson have long argued is growing steadily less implicit in Silicon Valley’s techno-triumphalism.

“Techno-Optimists believe that societies, like sharks, grow or die,” writes Egg, and Rose Eveleth was already like 🤔:

But before going fully mask-off, Andreessen has some crazy things to say about AI.

There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire.

But AI can surely help us kill the right people in war much more efficiently, yes? Still, he needs to make a pseudo-moral case to keep pumping cash into the AI bubble, so we get this:

We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.

Got that, Untermenschen? Regulation == murder. [Followed by the photo at the top]

But let’s get to the good stuff, in the section titled “Becoming Technological Supermen” (I swear I’m not making this up).

We believe in the romance of technology, of industry. The eros of the train, the car, the electric light, the skyscraper. And the microchip, the neural network, the rocket, the split atom.

We believe in adventure. Undertaking the Hero’s Journey, rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community.

To paraphrase a manifesto of a different time and place: “Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”

The first two paragraphs here are just bonkers. He’s horny for trains? I guess he saw North By Northwest at an impressionable age. But that last paragraph contains the only quote in the whole piece that isn’t attributed to a specific source, and it turns out it’s not really a paraphrase, it’s a direct quote from Filippo Marinetti’s 1909 “Futurist Manifesto” with “technology” substituted for the original’s “poetry.” I wonder if Marinetti wrote any other famous manifestos?

In case we somehow still don’t get it, Andreessen specifies that “The Enemy” is “the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable…” and then drops an extended Nietzsche excerpt. You know who else hated the ivory tower and loved Nietzsche?…

Industrial Society and Its Future (Are Gonna Be Great!),” from @rusty.todayintabs.com. Do yourself the favor of subscribing to Today in Tabs— it’s marvelous.

* Filippo Tommaso Marinetti, Manifesti Futuristi

###

As we reprioritize prudence, we might recall that it was on this date in 1896 that Richard F. Outcault’s comic strip Hogan’s Alley– featuring “the Yellow Kid” (Mickey Dugan)– debuted in William Randolph Hearst’s New York Journal. While “the Yellow Kid” had appeared irregularly before, it was (many historians suggest) the first full-color comic to be printed regularly, and one of the earliest in the history of comics; Outcault’s use of word balloons in the Yellow Kid influenced the basic appearance and use of balloons in subsequent newspaper comic strips and comic books. Outcault’s work aimed at humor and social commentary; but (perhaps ironically) the concept of “yellow journalism” referred to stories that were sensationalized for the sake of selling papers (as in the publications of Hearst and Joseph Pulitzer, an earlier home to sporadic appearances of the Yellow Kid)– and was so named after the “Yellow Kid” cartoons.

source
