“What we need is the celestial fire to change the flint into the transparent crystal, bright and clear”*…

… or so it used to be. Scientists at Google DeepMind and the Lawrence Berkeley National Laboratory have applied AI to the task– with encouraging results…
Modern technologies from computer chips and batteries to solar panels rely on inorganic crystals. To enable new technologies, crystals must be stable otherwise they can decompose, and behind each new, stable crystal can be months of painstaking experimentation.
… in a paper published in Nature, we share the discovery of 2.2 million new crystals – equivalent to nearly 800 years’ worth of knowledge. We introduce Graph Networks for Materials Exploration (GNoME), our new deep learning tool that dramatically increases the speed and efficiency of discovery by predicting the stability of new materials.
With GNoME, we’ve multiplied the number of technologically viable materials known to humanity. Of its 2.2 million predictions, 380,000 are the most stable, making them promising candidates for experimental synthesis. Among these candidates are materials that have the potential to develop future transformative technologies, ranging from superconductors for powering supercomputers to next-generation batteries that could boost the efficiency of electric vehicles.
GNoME shows the potential of using AI to discover and develop new materials at scale. External researchers in labs around the world have independently created 736 of these new structures experimentally in concurrent work. In partnership with Google DeepMind, a team of researchers at the Lawrence Berkeley National Laboratory has also published a second paper in Nature that shows how our AI predictions can be leveraged for autonomous material synthesis.
We’ve made GNoME’s predictions available to the research community. We will be contributing 380,000 materials that we predict to be stable to the Materials Project, which is now processing the compounds and adding them into its online database. We hope these resources will drive forward research into inorganic crystals, and unlock the promise of machine learning tools as guides for experimentation…
GNoME suggests that materials science may be the next frontier to be turbocharged by artificial intelligence (see this earlier example from biotech): “Millions of new materials discovered with deep learning.”
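A side note for the technically curious: the “stability” GNoME predicts is the standard criterion of computational materials science. A crystal counts as stable when its formation energy lies on the convex hull of all competing phases at its composition; the “energy above hull” measures how far short of stability it falls. Here is a minimal sketch of that criterion for a toy two-element system (all compositions and energies are invented for illustration; this is not GNoME’s code or data):

```python
# A toy illustration of the "energy above the convex hull" stability criterion.
# Phases are (x, e) pairs: x is the fraction of element B in an A-B system,
# e is the formation energy per atom in eV (0.0 for the pure elements).
# All numbers below are invented for illustration.

def lower_convex_hull(points):
    """Return the vertices of the lower convex hull of (x, e) points."""
    pts = sorted(points)
    hull = []
    for x3, y3 in pts:
        # Drop the previous vertex while it sits on or above the chord
        # running from the vertex before it to the new point.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    return hull

def energy_above_hull(x, e_predicted, known_phases):
    """Signed distance (eV/atom) of a candidate above the hull of known phases.
    A value <= 0 means the candidate is stable (on the hull, or a new ground state)."""
    hull = lower_convex_hull(known_phases)
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            e_hull = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
            return e_predicted - e_hull
    raise ValueError("composition lies outside the hull endpoints")

if __name__ == "__main__":
    # Hypothetical known phases of an A-B system, plus the pure elements.
    known = [(0.0, 0.0), (0.25, -0.40), (0.50, -0.55), (0.75, -0.30), (1.0, 0.0)]
    # A hypothetical new candidate at composition x = 0.6 with a predicted energy.
    d_e = energy_above_hull(0.6, -0.48, known)
    verdict = "stable (new ground state)" if d_e <= 0 else "metastable or unstable"
    print(f"energy above hull: {d_e:+.3f} eV/atom -> {verdict}")
```

Roughly speaking, GNoME’s contribution is the energy prediction itself: a graph neural network estimates the energies of vast numbers of candidate structures, so they can be screened against hulls like this one far faster than quantum-mechanical calculations alone would allow, with the most promising candidates then verified by first-principles calculations.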
* Henry Wadsworth Longfellow
###
As we drive discovery, we might recall that it was on this date in 1942 that a team of scientists led by Enrico Fermi, working inside an enormous tent on a squash court under the stands of the University of Chicago’s Stagg Field, achieved the first controlled, self-sustaining nuclear chain reaction… laying the foundation for the atomic bomb and, later, nuclear power generation– that’s to say, inaugurating the Atomic Age.
“…the Italian Navigator has just landed in the New World…”
– Coded telephone message confirming first self-sustaining nuclear chain reaction, December 2, 1942.

Indeed, exactly 15 years later, on this date in 1957, the world’s first full-scale atomic electric power plant devoted exclusively to peacetime uses, the Shippingport Atomic Power Station, reached criticality; the first power was produced 16 days later, after engineers integrated the generator into the distribution grid of Duquesne Light Company.

“Even a fool who keeps silent is considered wise; when he closes his lips, he is deemed intelligent.”*…
A substantial– and important– look at a troubling current aflow in the world of technology today: Emily Gorcenski on the millenarianism and manifest destiny of AI and techno-futurism…
… Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert them to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.
Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better if they can believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain only as long as there are new willing participants and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.
…
There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the 40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.
Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.
The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.
…
I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to coin the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.
…
… The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult and the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.
Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.
This post won’t convince anyone on the inside of the harms they are experiencing nor the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize someone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.
David Nye described 19th- and 20th-century Americans’ perception of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies…
Eminently worth reading in full: “Making God,” from @EmilyGorcenski (a relay to Mastodon and Bluesky).
See also: “Effective Obfuscation,” from Molly White (@molly0xFFF) and this thread from Emily Bender (@emilymbender).
* Proverbs 17:28
###
As we resist recruitment, we might spare a thought for Ada Lovelace (or, more properly, Augusta Ada King, Countess of Lovelace, née Byron); she died on this date in 1852. A mathematician and writer, she is chiefly remembered for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine— for which she authored what can reasonably be considered the first “computer program.” She was the first to recognize that the machine had applications beyond pure calculation, and so is one of the “parents” of the modern computer.

“Humanity is acquiring all the right technology for all the wrong reasons”*…
Further to yesterday’s post on the poverty created by manufacturing displacement, and in the wake of the sturm und drang occasioned by the coup at OpenAI, the estimable Rana Foroohar on the politics of AI…
… Consider that current politics in the developed world — from the rise of Donald Trump to the growth of far right and far left politics in Europe — stem in large part from disruptions to the industrial workforce due to technology and globalisation. The hollowing out of manufacturing work led to more populist and fractious politics, as countries tried (and often failed) to balance the needs of the global marketplace with those of voters.
Now consider that this past summer, the OECD warned that white-collar, skilled labour representing about a third of the workforce in the US and other rich countries is most at risk from disruption by AI. We are already seeing this happen in office work — with women and Asians particularly at risk since they hold a disproportionate share of the roles in question. As our colleague John Burn-Murdoch has charted [image above], online freelancers are especially vulnerable.
So, what happens when you add more than three times as many workers, in new subgroups, to the cauldron of angry white men that have seen their jobs automated or outsourced in recent decades? Nothing good. I’m always struck when CEOs like Elon Musk proclaim that we are headed towards a world without work as if this is a good thing. As academics like Angus Deaton and Anne Case have laid out for some time now, a world without work very often leads to “deaths of despair,” broken families, and all sorts of social and political ills.
Now, to be fair, Goldman Sachs has estimated that AI could double the recent rate of productivity growth — mirroring the impact of the PC revolution. This would lead to major growth which could, if widely shared, do everything from cutting child poverty to reducing our burgeoning deficit.
But that’s only if it’s shared. And the historical trend lines for technology aren’t good in that sense — technology often widens wealth disparities before labour movements and government regulation equalise things. (Think about the turn of the 20th century, up until the 1930s). But the depth and breadth of AI disruption may well cause unprecedented levels of global labour displacement and political unrest.
I am getting more and more worried that this is where we may be heading. Consider this new National Bureau of Economic Research working paper, which analyses why AI will be as transformative as the Industrial Revolution. It also predicts, however, that there is a very good chance that it lowers the labour share radically, even pushing it to zero, in the absence of policies that prevent this (the wonderful Daron Acemoglu and Simon Johnson make similar points, and lay out the history of such tech transformation in their book Power and Progress)…
We can’t educate ourselves out of this problem fast enough (or perhaps at all). We also can’t count on universal basic income to fix everything, no matter how generous it could be, because people simply need work to function (as Freud said, it’s all about work and love). Economists and political scientists have been pondering the existential risks of AI — from nuclear war to a pandemic — for years. But I wonder if the real existential crisis isn’t a massive crisis of meaning, and the resulting politics of despair, as work is displaced faster than we can fix the problem…
Everyone’s worried about AI, but are we worried about the right thing? “The politics of AI,” from @RanaForoohar in @FT.
See also: Henry Farrell’s “What OpenAI shares with Scientology” (“strange beliefs, fights over money, and bad science fiction”) and Dave Karpf’s “On OpenAI: Let Them Fight.” (“It’s chaos… And that’s a good thing.”)
For a different point-of-view, see: “OpenAI and the Biggest Threat in the History of Humanity,” from Tomás Pueyo.
And for deep background, read Benjamin Labatut’s remarkable The MANIAC.
* R. Buckminster Fuller
###
As we equilibrate, we might recall that it was on this date in 1874 that electrical engineer, inventor, and physicist Ferdinand Braun published a paper in the Annalen der Physik und Chemie describing his discovery of the electrical rectifier effect, the basis of the first practical semiconductor devices.
(Braun is better known for his contributions to the development of radio and television technology: he shared the 1909 Nobel Prize in Physics with Guglielmo Marconi “for their contributions to the development of wireless telegraphy” (Braun invented the crystal detector and the phased-array antenna); was a founder of Telefunken, one of the pioneering communications and television companies; and, as the builder of the first cathode ray tube, has been called the “father of television,” an honor he shares with inventors like Paul Gottlieb Nipkow.)
“Our poetry is courage, audacity and revolt”*…
One of your correspondent’s daily delights is Rusty Foster’s Today in Tabs, a newsletter that informs and provokes as it, inevitably, amuses. Take, for example, this excerpt from Monday’s installment, subtitled “Today in Fascism”…
“Could the end of the AI hype cycle be in sight?” asked TechBrew’s Patrick Kulp and precisely on time today here’s a doorstop of LinkedIn-brained crypto-(but-not-too-crypto)-fascism from Egg Andreessen titled “The Techno-Optimist Manifesto.” It’s very long, and you should absolutely not read it, but it’s useful for finally making explicit the fascist philosophy that people like Brad Johnson have long argued is growing steadily less implicit in Silicon Valley’s techno-triumphalism.
“Techno-Optimists believe that societies, like sharks, grow or die,” writes Egg, and Rose Eveleth was already like 🤔:
But before going fully mask-off, Andreessen has some crazy things to say about AI.
There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire.
But AI can surely help us kill the right people in war much more efficiently, yes? Still, he needs to make a pseudo-moral case to keep pumping cash into the AI bubble, so we get this:
We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.
Got that, Untermenschen? Regulation == murder. [Followed by the photo at the top]
But let’s get to the good stuff, in the section titled “Becoming Technological Supermen” (I swear I’m not making this up).
We believe in the romance of technology, of industry. The eros of the train, the car, the electric light, the skyscraper. And the microchip, the neural network, the rocket, the split atom.
We believe in adventure. Undertaking the Hero’s Journey, rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community.
To paraphrase a manifesto of a different time and place: “Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”
The first two paragraphs here are just bonkers. He’s horny for trains? I guess he saw North By Northwest at an impressionable age. But that last paragraph contains the only quote in the whole piece that isn’t attributed to a specific source, and it turns out it’s not really a paraphrase, it’s a direct quote from Filippo Marinetti’s 1909 “Futurist Manifesto” with “technology” substituted for the original’s “poetry.” I wonder if Marinetti wrote any other famous manifestos?
In case we somehow still don’t get it, Andreessen specifies that “The Enemy” is “the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable…” and then drops an extended Nietzsche excerpt. You know who else hated the ivory tower and loved Nietzsche?…
“Industrial Society and Its Future (Are Gonna Be Great!),” from @rusty.todayintabs.com. Do yourself the favor of subscribing to Today in Tabs— it’s marvelous.
* Filippo Tommaso Marinetti, Manifesti Futuristi
###
As we reprioritize prudence, we might recall that it was on this date in 1896 that Richard F. Outcault’s comic strip Hogan’s Alley— featuring “the Yellow Kid” (Mickey Dugan)– debuted in William Randolph Hearst’s New York Journal. While “the Yellow Kid” had appeared irregularly before, it was (many historians suggest) the first full-color comic to be printed regularly, and one of the earliest in the history of the medium; Outcault’s use of word balloons in the Yellow Kid influenced the basic appearance and use of balloons in subsequent newspaper comic strips and comic books. Outcault’s work aimed at humor and social commentary; but (perhaps ironically) the concept of “yellow journalism” referred to stories which were sensationalized for the sake of selling papers (as in the publications of Hearst and Joseph Pulitzer, an earlier home to sporadic appearances of the Yellow Kid) and was so named after the “Yellow Kid” cartoons.