(Roughly) Daily

Posts Tagged ‘nuclear power’

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

There is a wide range of opinions on AI and what it might portend. While artificial intelligence has its skeptics, and some argue that we should slow its development, AI is here, and it’s only getting warmed up (cf. Ezra Klein‘s “This Changes Everything”).

As applications multiply (and get more sophisticated), there’s an understandable concern about its impact on employment. While tools like ChatGPT and DALL·E 2 are roiling the creative sphere, many economists are looking more broadly…

Like many revolutionary technologies before it, AI is likely to eliminate jobs. But, as has been the case in the past, experts argue, AI will likely offset much of that by spurring the creation of new jobs in addition to enhancing many existing jobs. The big question is: what sort of jobs?

“AI will wipe out a lot of current jobs, as has happened with all past technologies,” said Lawrence Katz, a labor economist at Harvard. “But I have no reason to think that AI and robots won’t continue changing the mix of jobs. The question is: will the change in the mix of jobs exacerbate existing inequalities? Will AI raise productivity so much that even as it displaces a lot of jobs, it creates new ones and raises living standards?”

Anu Madgavkar, who leads labor market research at the McKinsey Global Institute, estimates that one in four workers in the US are going to see more AI and technology adopted in their jobs. She said 50-60% of companies say they are pursuing AI-related projects. “So one way or the other people are going to have to learn to work with AI,” Madgavkar said.

While past rounds of automation affected factory jobs most, Madgavkar said that AI will hit white-collar jobs most. “It’s increasingly going into office-based work and customer service and sales,” she said. “They are the job categories that will have the highest rate of automation adoption and the biggest displacement. These workers will have to work with it or move into different skills.”…

US experts warn AI likely to kill off jobs – and widen wealth inequality

But most of these visions are rooted in an appreciation of what AI can currently do (and the likely extensions of those capabilities). What if AI develops in startling, discontinuous ways– what if it exhibits “emergence”?…

… Recent investigations… have revealed that LLMs (large language models) can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes…

The Unpredictable Abilities Emerging From Large AI Models

Perhaps we should be thinking about AI not just functionally, but also philosophically…

The development of Artificial Intelligence is a scientific and engineering project, but it’s also a philosophical one. Lingering debates in the philosophy of mind have the potential to be substantially demystified, if not outright resolved, through the creation of artificial minds that parallel capabilities once thought to be the exclusive province of the human brain.

And since our brain is how we know and interface with the world more generally, understanding how the mind works can shed light on every other corner of philosophy as well, from epistemology to metaethics. My view is thus the exact opposite of Noam Chomsky’s, who argues that the success of Large Language Models is of limited scientific or philosophical import, since such models ultimately reduce to giant inscrutable matrices. On the contrary, the discovery that giant inscrutable matrices can, under the right circumstances, do many things that otherwise require a biological brain is itself a striking empirical datum — one Chomsky chooses to simply dismiss a priori.

Biological brains differ in important ways from artificial neural networks, but the fact that the latter can emulate the capacities of the former really does contribute to human self-understanding. For one, it represents an independent line of evidence that the brain is indeed computational. But that’s just the tip of the iceberg. The success of LLMs may even help settle longstanding debates on the nature of meaning itself…

We’re all Wittgensteinians now

And maybe we should be careful about “othering” AI (or, for that matter, any of the other forms of intelligence that surround us)…

I don’t think there is such a thing as an artificial intelligence. There are multiple intelligences, many ways of doing intelligence. What I envisage to be more useful and interesting than artificial intelligence as we currently conceive of it—which is this incredibly reduced version of human intelligence— is something more distributed, more widely empowered, and more diverse than singular intelligence would allow for. It’s actually a conversation between multiple intelligences, focused on some narrow goals. I have a new, very long-term, very nascent project I’m calling Server Farm. And the vision of Server Farm is to create a setting in which multiple intelligences could work on a problem together. Those intelligences would be drawn from all different kinds of life. That could include computers, but it could also include fungi and plants and animals in some kind of information-sharing processing arrangement. The point is that it would involve more than one kind of thinking, happening in dialogue and relationship with each other.

James Bridle, “There’s Nothing Unnatural About a Computer”

In the end, Tyler Cowen suggests, we should keep developing AI…

…what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?…

We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge…

Existential risk, AI, and the inevitable turn in human history

Still, we’re human, and we would do well, Samuel Arbesman suggests, to use the best of our human “tools”– the humanities– to understand AI…

So go study the concepts of narrative technique and use them to elucidate the behavior of LLMs. Or examine the rhetorical devices that writers and speakers have been using for millennia—and which GPT models have imbibed—and figure out how to use their “physical” principles in relating to these language models.

Ultimately, we need a deeper kind of cultural and humanistic competence, one that doesn’t just vaguely gesture at certain parts of history or specific literary styles. It’s still early days, but we need more of this thinking. To quote Hollis Robbins again: “Nobody yet knows what cultural competence will be in the AI era.” But we must begin to work this out.

AI, Semiotic Physics, and the Opcodes of Story World

All of which is to suggest that we are faced with a future that may well contain currently unimaginable capabilities, which can accrue as threats or as opportunities (or both). So, as the estimable Jaron Lanier reminds us, we need to remain centered…

“From my perspective,” he says, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”…

The way to ensure that we are sufficiently sane to survive is to remember it’s our humanness that makes us unique…

Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’

All of the above-sampled pieces are eminently worth reading in full.

Apposite (and offered without comment): Theta Noir

[Image above: source]

* Alan Kay

###

As we ponder progress, we might recall that it was on this date in 1979 that operators failed to notice that a relief valve was stuck open in the primary coolant system of Three Mile Island’s Unit 2 nuclear reactor following an unexpected shutdown. Consequently, enough coolant drained out of the system to allow the core to overheat and partially melt down– the worst commercial nuclear accident in American history.

Three Mile Island Nuclear Power Plant, near Harrisburg, PA

“Better to see something once than to hear about it a thousand times”*…

Stalking Chernobyl

In recent years, the Zone, a highly restricted area in northern Ukraine that surrounds the site of the 1986 nuclear disaster, has become a tourist hotspot. Each morning, tour buses queue at the entry checkpoint where a souvenir shop plastered with nuclear warning symbols peddles neon keyrings and radiation suits. The guides’ t-shirts read: “Follow me and you will survive”. In fact, the dangers are minimal. Along their tightly demarcated routes, these visitors will be exposed to less radiation than during a routine x-ray.

Existing in the shadows of this highly commodified industry is the secretive subculture of the “stalkers”: mostly young Ukrainian men who sneak into the Zone illegally to explore the vast wilderness on their own terms. The name originates from the 1972 Russian science fiction novel Roadside Picnic. Written by brothers Arkady and Boris Strugatsky, it tells the story of contaminated “zones” created on Earth by aliens, in which rogue stalkers roam, hoping to recover valuable alien technology. The book inspired Andrei Tarkovsky’s 1979 cult-classic film Stalker.

Beyond youthful rebellion, the motivations of the modern stalkers are complex, and speak to the national trauma that resulted from a tragedy whose effects will be felt for generations. And now there is another side to the practice. Enterprising stalkers have started offering their own “illegal tours” to travellers seeking a less restricted (and therefore more dangerous) experience of the Exclusion Zone. I joined one such tour in an effort to discover why visitors might choose a stalker over an official guide. Can a subculture that is so tied to deep wells of personal and national loss really offer something of value to an outsider?…

Accompany Aram Balakjian on a beautifully-photographed expedition through the forbidden area: “Into the Zone: 4 days inside Chernobyl’s secretive ‘stalker’ culture.”

* Uzbek proverb

###

As we take the tour, we might recall that it was on this date in 2000 that, after decades of denial, a U.S. government study conceded that cancer and the premature deaths of workers at 14 nuclear weapons plants since WW II were caused by radiation and chemicals; the findings were announced by then-Secretary of Energy Bill Richardson.

nuke source


Written by (Roughly) Daily

January 28, 2019 at 1:01 am

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”*…

quantum computing

Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.

We’ve been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex manmade systems, and artificial intelligence.” We’ve been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape.” We’ve even been told that “the encryption that protects the world’s most sensitive data may soon be broken” by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.

Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.

It’s become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world’s top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.

In light of all this, it’s natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future.” Having spent decades conducting research in quantum and condensed-matter physics, I’ve developed my very pessimistic view. It’s based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work…

Mikhail Dyakonov makes “The Case Against Quantum Computing.”

* Albert Einstein

###

As we feel the need for speed, we might recall that it was on this date in 1942 that a team of scientists led by Enrico Fermi, working inside an enormous tent on a squash court under the stands of the University of Chicago’s Stagg Field, achieved the first controlled nuclear fission chain reaction… laying the foundation for the atomic bomb and later, nuclear power generation.

“…the Italian Navigator has just landed in the New World…”
– Coded telephone message confirming first self-sustaining nuclear chain reaction, December 2, 1942.

Illustration depicting the scene on Dec. 2, 1942 (Photo copyright of Chicago Historical Society)

source

Indeed, exactly 15 years later, on this date in 1957, the world’s first full-scale atomic electric power plant devoted exclusively to peacetime uses, the Shippingport Atomic Power Station, reached criticality; the first power was produced 16 days later, after engineers integrated the generator into the distribution grid of Duquesne Light Company.

 source


Written by (Roughly) Daily

December 2, 2018 at 1:01 am

“If a picture is worth a thousand words, what is reality worth?”*…


It is tempting to believe that we live in a time uniquely saturated with images. And indeed, the numbers are staggering: Instagrammers upload about 95 million photos and videos every day. A quarter of Americans use the app, and the vast majority of them are under 40. Because Instagram skews so much younger than Facebook or Twitter, it is where “tastemakers” and “influencers” now live online, and where their audiences spend hours each day making and absorbing visual content. But so much of what seems bleeding edge may well be old hat; the trends, behaviors, and modes of perception and living that so many op-ed columnists and TED-talk gurus attribute to smartphones and other technological advances are rooted in the much older aesthetic of the picturesque.

Wealthy eighteenth-century English travelers… used technology to mediate and pictorialize their experiences of nature just as Instagrammers today hold up their phones and deliberate over filters…

The pre-history of “influencers” and their images: “The Instagrammable Charm of the Bourgeoisie.”

* Marty Rubin

###


Written by (Roughly) Daily

December 2, 2017 at 1:01 am

“Our social tools are not an improvement to modern society, they are a challenge to it”*…


The limbic system is the center for pleasure and addiction in the rodent nervous system. In a controlled study on adolescent rats, scientists sought to determine whether or not the levels of dopamine, the “feel good” neurotransmitter, could be maintained in this region over prolonged social media use. With a series of topical content posts, evergreen posts, and meme dissemination, scientists were able to gauge whether or not the “thrill” derived from getting likes, favorites, or retweets was sustainable over a finite period of time…

Rats that only ever received 20-30 likes after sharing a “well-rounded” think piece would enjoy an extremely high level of dopamine if they broke 50 likes on an unexpected political rant declaring that “Trump had finally gone too far.” But, when the same rat racked up similar numbers by acknowledging that his news feed was a “political echo chamber,” activity in this region of the brain slowed down once again…

In short, social media does not prove to be a sustainable source of cognitive reward…

Read the all-too-painfully-relevant “results” in full at Adam Rotstein‘s “Regulation of Dopamine During Social Media Use in Adolescent Rats.”

* Clay Shirky

###

As we burst bubbles, we might recall that it was on this date in 2000 that the nuclear generating facility at Chernobyl, in Ukraine, was (finally) shut down.  14 years earlier, it had been the site of the worst nuclear power plant accident in history (in terms of cost and casualties), one of only two classified as a level 7 event (the maximum classification) on the International Nuclear Event Scale, the other being the Fukushima Daiichi nuclear disaster in Japan in 2011.  On April 26, 1986, Reactor #4 exploded, causing massive damage on site and releasing radioactive plumes that spread over Europe and the USSR for 9 days.  Two workers were killed in the explosion; 29 died in the immediate aftermath (of acute radiation poisoning).  The remains of Reactor #4 were enclosed in a massive “sarcophagus,” and the other three reactors were returned to service.  One by one, they failed.  The shutdown held on this date in 2000 was ceremonial: Reactor #3, the last one standing, had in fact been shut down the previous week because of technical problems.  It was restarted– unattached to the national grid and at minimum power output– so that the world would be able to see it symbolically switched off.

The hole where Reactor #4 stood before the accident

source


Written by (Roughly) Daily

December 15, 2016 at 1:01 am
