(Roughly) Daily

Posts Tagged ‘Bug’

“I like to think (it has to be) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters”*…

A.I. pioneer Dario Amodei with a positive scenario for artificial intelligence…

I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one…

How AI could transform the world for the better: “Machines of Loving Grace,” from @DarioAmodei. Eminently worth reading in full…

A (similarly positive, but slightly more focused) piece from a team at DeepMind: “AI for Science.”

Apposite (if not opposite): “Shoggoths amongst us,” from Henry Farrell, and an earlier (R)D, “We ceased to be the lunatic fringe. We’re now the lunatic core.”

See also: “AI Isn’t Your God—But It Might Be Your Intern.”

* Richard Brautigan, “All Watched Over By Machines Of Loving Grace” (the source of Amodei’s title)

###

As we ponder the perplexities of progress, we might send carefully-calculated birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906.  A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

Source

Written by (Roughly) Daily

December 9, 2024 at 1:00 am

“The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office”*…

For as long as humans have thought, humans have thought about thinking. George Cave on the power and the limits of the metaphors we’ve used to do that…

For thousands of years, humans have described their understanding of intelligence with engineering metaphors. In the 3rd century BCE, the invention of hydraulics popularized the model of fluid flow (“humours”) in the body. This lasted until the 1500s, supplanted by the invention of automata and the idea of humans as complex machines. From electrical and chemical metaphors in the 1700s to advances in communications a century later, each metaphor reflected the most advanced thinking of that era. Today is no different: we talk of brains that store, process and retrieve memories, mirroring the language of computers.

I’ve always believed metaphors to be helpful and productive in communicating unfamiliar concepts. But this fascinating history of cognitive science metaphors shows that flawed metaphors can take hold and limit the scope for alternative ideas. In the worst case, the EU spent 10 years and $1.3 billion building a model of the brain based on the incorrect belief that the brain functions like a computer…

Thinking about thinking, from @George_Cave in @the_prepared.

Apposite: “Finding Language in the Brain.”

* Robert Frost

###

As we cogitate on cognition, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper.  A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

source

Written by (Roughly) Daily

December 9, 2022 at 1:00 am

“Knowledge of means without knowledge of ends is animal training”*…

Spy vs. Spy

According to a March 1967 report entitled “Views on Trained Cats [Redacted] for [Redacted] Use,” the CIA stuffed a real, live cat with electronic spying equipment and attempted to train it to spy on America’s Cold War rivals.  The report states that Acoustic Kitty (as the project is commonly known) was a “remarkable scientific achievement.” Unfortunately, the report also states that the continued use of live cats as eavesdropping devices “would not be practical.”

According to Victor Marchetti [an ex-Deputy Director of the CIA]: “A lot of money was spent. They slit the cat open, put batteries in him, wired him up. The tail was used as an antenna. They made a monstrosity. They tested him and tested him. They found he would walk off the job when he got hungry, so they put another wire in to override that. Finally they’re ready. They took it out to a park and pointed it at a park bench and said, ‘Listen to those two guys…’ They put him out of the van, and a taxi comes and runs him over. There they were, sitting in the van with all those dials, and the cat was dead!”…

Acoustic Kitty

For more on animal training adventures in the security services, see “The CIA’s Most Highly-Trained Spies Weren’t Even Human.”

* Everett Dean Martin

###

As we study subterfuge, we might recall that it was on this date in 1974 that transcripts of the audiotaped White House conversations between President Richard Nixon and Chief of Staff Bob Haldeman were released to the public. Considered at the time a “smoking gun,” the transcripts confirmed Nixon’s involvement in the Watergate cover-up– and precipitated Nixon’s resignation three days later.

Transcripts of the Watergate tapes arriving on Capitol Hill to be turned over to the House Judiciary Committee.

source

“Reality is frequently inaccurate”*…

Machine learning and what it may teach us about reality…

Our latest paradigmatic technology, machine learning, may be revealing the everyday world as more accidental than rule-governed. If so, it will be because machine learning gains its epistemological power from its freedom from the sort of generalisations that we humans can understand or apply.

The opacity of machine learning systems raises serious concerns about their trustworthiness and their tendency towards bias. But the brute fact that they work could be bringing us to a new understanding and experience of what the world is and our role in it…

The world is a black box full of extreme specificity: it might be predictable but that doesn’t mean it is understandable: “Learn from Machine Learning,” by David Weinberger (@dweinberger) in @aeonmag.
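A toy illustration of that point (mine, not Weinberger’s; it assumes NumPy, and the function and variable names are invented for the sketch): a nearest-neighbour predictor can track a deliberately messy “world” quite accurately while offering nothing a human could read off and apply as a rule — it simply remembers particulars.

```python
# A sketch of prediction without a human-legible rule: a 1-nearest-neighbour
# model "works" by memorising specific cases, not by stating a generalisation.
import numpy as np

rng = np.random.default_rng(0)

def world(x):
    # A deliberately messy ground truth: no tidy rule a person would guess from the data.
    return np.sin(3 * x) * np.sign(np.cos(7 * x)) + 0.1 * np.floor(5 * x)

X_train = rng.uniform(0, 1, 2000)
y_train = world(X_train)                  # the model is nothing but these remembered pairs

def predict(x_query):
    """Answer each query with the remembered output of the closest previously seen case."""
    nearest = np.abs(X_train[None, :] - x_query[:, None]).argmin(axis=1)
    return y_train[nearest]

X_test = rng.uniform(0, 1, 500)
mae = np.mean(np.abs(predict(X_test) - world(X_test)))
print(f"mean absolute error: {mae:.3f}")  # accurate, yet there is no rule to inspect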

(image above: source)

* Douglas Adams, The Restaurant at the End of the Universe

###

As we ruminate on the real, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper.  A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

source

“We often plough so much energy into the big picture, we forget the pixels”*…

Alvy Ray Smith (see also here) was born before computers, made his first computer graphic in 1964, cofounded Pixar, was the first director of computer graphics at Lucasfilm, and the first graphics fellow at Microsoft. He is the author of the terrific new book A Biography of the Pixel (2021), from which this excerpt is drawn…

I have billions of pixels in my cellphone, and you probably do too. But what is a pixel? Why do so many people think that pixels are little abutting squares? Now that we’re aswim in an ocean of zettapixels (21 zeros), it’s time to understand what they are. The underlying idea – a repackaging of infinity – is subtle and beautiful. Far from being squares or dots that ‘sort of’ approximate a smooth visual scene, pixels are the profound and exact concept at the heart of all the images that surround us – the elementary particles of modern pictures.

This brief history of the pixel begins with Joseph Fourier in the French Revolution and ends in the year 2000 – the recent millennium. I strip away the usual mathematical baggage that hides the pixel from ordinary view, and then present a way of looking at what it has wrought.

The millennium is a suitable endpoint because it marked what’s called the great digital convergence, an immense but uncelebrated event, when all the old analogue media types coalesced into the one digital medium. The era of digital light – all pictures, for whatever purposes, made of pixels – thus quietly began. It’s a vast field: books, movies, television, electronic games, cellphone displays, app interfaces, virtual reality, weather satellite images, Mars rover pictures – to mention a few categories – even parking meters and dashboards. Nearly all pictures in the world today are digital light, including nearly all the printed words. In fact, because of the digital explosion, this includes nearly all the pictures ever made. Art museums and kindergartens are among the few remaining analogue bastions, where pictures fashioned from old media can reliably be found…

An exact mathematical concept, pixels are the elementary particles of pictures, based on a subtle unpacking of infinity: “Pixel: a biography,” from @alvyray.
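Smith’s “repackaging of infinity” is, at bottom, the Shannon sampling theorem: a pixel is a point sample of a smooth signal, and, below the Nyquist limit, the samples determine the whole continuous curve. A minimal sketch of that idea (mine, not the book’s; one-dimensional for brevity, assuming NumPy, with illustrative names and parameters):

```python
# A sketch of Whittaker-Shannon reconstruction: point samples ("pixels") of a
# band-limited signal recover the continuous curve, not a mosaic of little squares.
import numpy as np

SAMPLE_RATE = 8.0                          # samples per unit time, above the Nyquist rate
N_SAMPLES = 64

def signal(t):
    # A smooth, band-limited test signal (highest frequency 2.5 < SAMPLE_RATE / 2).
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 2.5 * t)

def reconstruct(samples, t):
    """Rebuild the continuous signal by summing one shifted sinc kernel per sample."""
    n = np.arange(len(samples))
    return np.sum(samples[:, None] * np.sinc(SAMPLE_RATE * t[None, :] - n[:, None]), axis=0)

n = np.arange(N_SAMPLES)
samples = signal(n / SAMPLE_RATE)          # the "pixels": point values of the curve

t = np.linspace(2.0, 6.0, 400)             # evaluate away from the edges of the finite window
error = np.max(np.abs(reconstruct(samples, t) - signal(t)))
print(f"max reconstruction error: {error:.4f}")  # small; residue comes from the finite window
```

The same reconstruction carried out in two dimensions is why resampled photographs need not look like grids of abutting squares: the squares are a display convenience, not the pixel itself.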

* Dame Silvia Cartwright

###

As we ruminate on resolution, we might recall that it was on this date in 1947 that fabled computer scientist Grace Hopper (see here and here), then a programmer at Harvard’s Mark II Aiken Relay Calculator, found and documented the first computer “bug”– an insect that had lodged in the works.  The incident is recorded in Hopper’s logbook alongside the offending moth, taped to the logbook page: “15:45 Relay #70 Panel F (moth) in relay. First actual case of bug being found.”

This anecdote has led to Hopper being pretty widely credited with coining the term “bug” (and ultimately “de-bug”) in its technological usage… but the term actually dates back at least to Thomas Edison…

Grace Hopper’s log entry (source)

Written by (Roughly) Daily

September 9, 2021 at 1:00 am