(Roughly) Daily

Posts Tagged ‘computing’

“On the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on our way to using it to create a world of suffering and chaos. Paradoxical, no?”*…

Joseph Weizenbaum, a distinguished professor at MIT, was one of the fathers of artificial intelligence and computing as we know it; he was also one of its earliest critics– one whose concerns remain all too current. After a review of his warnings, Librarian Shipwreck shares a still-relevant set of questions Weizenbaum proposed…

At the end of his essay “Once more—A Computer Revolution,” which appeared in the Bulletin of the Atomic Scientists in 1978, Weizenbaum concluded with a set of five questions. As he put it, these were the sorts of questions that “are almost never asked” when it comes to this or that new computer-related development. These questions did not lend themselves to simple yes or no answers, but instead called for serious debate and introspection. Thus, in the spirit of that article, let us conclude this piece not with definitive answers, but with more questions for all of us to contemplate. Questions that were “almost never asked” in 1978, and which are still “almost never asked” in 2023. They are as follows:

• Who is the beneficiary of our much-advertised technological progress and who are its victims?

• What limits ought we, the people generally and scientists and engineers particularly, to impose on the application of computation to human affairs?

• What is the impact of the computer, not only on the economies of the world or on the war potential of nations, etc…but on the self-image of human beings and on human dignity?

• What irreversible forces is our worship of high technology, symbolized most starkly by the computer, bringing into play?

• Will our children be able to live with the world we are here and now constructing?

As Weizenbaum put it, “much depends on answers to these questions.”

Much still depends on answers to these questions.

Eminently worth reading in full: “‘Computers enable fantasies’ – on the continued relevance of Weizenbaum’s warnings,” from @libshipwreck.

See also: “An island of reason in the cyberstream – on the life and thought of Joseph Weizenbaum.”

* Joseph Weizenbaum (1983)

###

As we stay grounded, we might spare a thought for George Stibitz; he died on this date in 1995. A Bell Labs researcher, he was known for his work in the 1930s and 1940s on the realization of Boolean logic digital circuits using electromechanical relays as the switching element– work for which he is internationally recognized as one of the fathers of the modern digital computer.

In 1937, Stibitz built a digital machine based on relays, flashlight bulbs, and metal strips cut from tin cans. He called it the “Model K” because most of it was constructed on his kitchen table. It worked on the principle that when two relays were activated, they caused a third relay to become active, and that third relay represented the sum of the operation. Then, in 1940, he gave the first demonstration of the remote operation of a computer.
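In modern terms, the Model K behaved like a one-bit binary adder. Here is a minimal sketch of that logic in Python– an illustration of the arithmetic only, not of Stibitz’s actual relay wiring:

```python
# One-bit ("half") adder: the arithmetic the Model K demonstrated with relays.
# Two binary inputs produce a sum bit and a carry bit.
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers; return (sum_bit, carry_bit)."""
    sum_bit = a ^ b    # exactly one input active -> the sum output is on
    carry_bit = a & b  # both inputs active -> the carry output is on
    return sum_bit, carry_bit

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> carry={c}, sum={s}")
```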


“The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office”*…

For as long as humans have thought, humans have thought about thinking. George Cave on the power and the limits of the metaphors we’ve used to do that…

For thousands of years, humans have described their understanding of intelligence with engineering metaphors. In the 3rd century BCE, the invention of hydraulics popularized the model of fluid flow (“humours”) in the body. This lasted until the 1500s, when it was supplanted by the invention of automata and the idea of humans as complex machines. From electrical and chemical metaphors in the 1700s to advances in communications a century later, each metaphor reflected the most advanced thinking of that era. Today is no different: we talk of brains that store, process and retrieve memories, mirroring the language of computers.

I’ve always believed metaphors to be helpful and productive in communicating unfamiliar concepts. But this fascinating history of cognitive science metaphors shows that flawed metaphors can take hold and limit the scope for alternative ideas. In the worst case, the EU spent 10 years and $1.3 billion building a model of the brain based on the incorrect belief that the brain functions like a computer…

Thinking about thinking, from @George_Cave in @the_prepared.

Apposite: “Finding Language in the Brain.”

* Robert Frost

###

As we cogitate on cognition, we might send carefully-computed birthday greetings to Grace Brewster Murray Hopper.  A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.

Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).

She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.


Written by (Roughly) Daily

December 9, 2022 at 1:00 am

“Prediction is very difficult, especially if it’s about the future”*…

… but maybe not as hard as it once was. While multi-agent artificial intelligence was first used in the sixties, advances in technology have made it an extremely sophisticated modeling– and prediction– tool. As Derek Beres explains, it can be a powerfully-accurate prediction engine… and it can potentially also be an equally powerful tool for manipulation…

The debate over free will is ancient, yet data don’t lie — and we have been giving tech companies access to our deepest secrets… We like to believe we’re not predictable, but that’s simply not true…

Multi-agent artificial intelligence (MAAI) is predictive modeling at its most advanced. It has been used for years to create digital societies that mimic real ones with stunningly accurate results. In an age of big data, there exists more information about our habits — political, social, fiscal — than ever before. As we feed them information on a daily basis, their ability to predict the future is getting better.

[And] given the current political climate around the planet… MAAI will most certainly be put to insidious means. With in-depth knowledge comes plenty of opportunities for exploitation and manipulation, no deepfake required. The intelligence might be artificial, but the target audience most certainly is not…
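To make those “digital societies” a bit more concrete, here is a deliberately tiny agent-based simulation sketch in Python. It illustrates only the general mechanics; the agents, the influence rule, and every parameter below are invented for the example, and real MAAI systems are vastly richer.

```python
import random

# Toy agent-based simulation: each agent holds a yes/no opinion and, at every
# step, may adopt the opinion of a randomly chosen peer. Watching the aggregate
# drift over time is the (greatly simplified) sense in which such simulations
# are used to "predict."
def simulate(n_agents: int = 1000, steps: int = 50,
             adopt_prob: float = 0.3, seed: int = 42) -> list[float]:
    rng = random.Random(seed)
    opinions = [rng.random() < 0.5 for _ in range(n_agents)]  # start ~50/50
    share_yes = []
    for _ in range(steps):
        for i in range(n_agents):
            peer = rng.randrange(n_agents)
            if rng.random() < adopt_prob:
                opinions[i] = opinions[peer]  # social influence
        share_yes.append(sum(opinions) / n_agents)
    return share_yes

if __name__ == "__main__":
    trajectory = simulate()
    print(f"share holding 'yes' after {len(trajectory)} steps: {trajectory[-1]:.2f}")
```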

Move over deepfakes; multi-agent artificial intelligence is poised to manipulate your mind: “Can AI simulations predict the future?,” from @derekberes at @bigthink.


* Niels Bohr

###

As we analyze augury, we might note that today is National Computer Security Day. It was inaugurated by the Association for Computing Machinery (ACM) in 1988, shortly after an attack on ARPANET (the forerunner of the internet as we know it) that damaged several of the connected machines. Meant to call attention to the constant need for vigilance about security, it's a great day to change all of one's passwords.


Written by (Roughly) Daily

November 30, 2022 at 1:00 am

“There are only two different types of companies in the world: those that have been breached and know it and those that have been breached and don’t know it.”*…

Enrique Mendoza Tincopa with a visualization of what's on offer on the dark web and what it costs…

Did you know that the internet you’re familiar with is only 10% of the total data that makes up the World Wide Web?

The rest of the web is hidden from plain sight, and requires special access to view. It's known as the Deep Web, and nestled far down in the depths of it is a dark, sometimes dangerous place known as the darknet, or Dark Web.

Visual Capitalist


And for a look at the research that underlies the graphic, click here.

What’s your personal information worth? “The Dark Web Price Index 2022,” from @DatavizAdventuR via @VisualCap.


* Ted Schlein

###

As we harden our defenses, we might recall that it was on this date in 1994 that arguments began in the case of United States vs. David LaMacchia, in which David LaMacchia stood accused of Conspiracy to Commit Wire Fraud. He had allegedly operated the “Cynosure” bulletin board system (BBS) for six weeks, using it to host pirated software on Massachusetts Institute of Technology (MIT) servers. Federal prosecutors didn't directly charge LaMacchia with violating copyright statutes; rather, they chose to charge him under a federal wire fraud statute that had been enacted in 1952 to prevent the use of telephone systems for interstate fraud. But the court ruled that as he had no commercial motive (he was not charging for the shared software), copyright violation could not be prosecuted under the wire fraud statute; LaMacchia was found not guilty– giving rise to what became known as “the LaMacchia loophole”… and spurring legislative action to try to close that gap.

Background documents from the case are here.

The MIT student paper, covering the case (source)

Written by (Roughly) Daily

November 18, 2022 at 1:00 am

“It’s going to be interesting to see how society deals with artificial intelligence”*…

Interesting, yes… and important. Stephanie Losi notes that “in some other fields, insufficient regulation and lax controls can lead to bad outcomes, but it can take years. With AI, insufficient regulation and lax controls could lead to bad outcomes extremely rapidly.” She offers a framework for thinking about the kinds of regulation we might need…

Recent advances in machine learning like DALL-E 2 and Stable Diffusion show the strengths of artificial narrow intelligence. That means they perform specialized tasks instead of general, wide-ranging ones. Artificial narrow intelligence is often regarded as safer than a hypothetical artificial general intelligence (AGI), which could challenge or surpass human intelligence. 

But even within their narrow domains, DALL-E, Stable Diffusion, and similar models are already raising questions like, “What is real art?” And large language models like GPT-3 and CoPilot dangle the promise of intuitive software development sans the detailed syntax knowledge required for traditional programming. Disruption looms large—and imminent. 

One of the challenges of risk management is that technology innovation tends to outpace it. It’s less fun to structure a control framework than it is to walk on fresh snow, so breakthroughs happen and then risk management catches up. But with AI, preventive controls are especially important, because AI is so fast that detective and corrective controls might not have time to take effect. So, making sure controls do keep up with innovation might not be fun or flashy, but it’s vital. Regulation done right could increase the chance that the first (and possibly last) AGI created is not hostile as we would define that term.

In broad strokes, here are some aspects of a control framework for AI…
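Losi's aspects are worth reading in her own words; as a toy illustration of the distinction she draws between preventive and detective controls, a preventive control in code is simply a gate that refuses to act until required checks have passed. (The release object, check names, and gate below are hypothetical, invented for the example– they are not part of her framework.)

```python
from dataclasses import dataclass, field

# A *preventive* control: deployment is blocked unless every required
# pre-release check has passed. A *detective* control, by contrast, would
# only surface problems after the model is already live.
@dataclass
class ModelRelease:
    name: str
    checks: dict[str, bool] = field(default_factory=dict)

REQUIRED_CHECKS = ["bias_evaluation", "red_team_review", "rollback_plan"]

def preventive_gate(release: ModelRelease) -> bool:
    """Return True only if all required checks passed; otherwise block."""
    missing = [c for c in REQUIRED_CHECKS if not release.checks.get(c, False)]
    if missing:
        print(f"BLOCKED {release.name}: missing {', '.join(missing)}")
        return False
    print(f"APPROVED {release.name} for deployment")
    return True

if __name__ == "__main__":
    preventive_gate(ModelRelease("demo-model", {"bias_evaluation": True}))
```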

Eminently worth reading in full: “Possible Paths for AI Regulation.”

See also: “Can regulators keep up with AI in healthcare?” (source of the image above)

And as we ponder constructive constraints, we might keep in mind Gray Scott‘s (seemingly-contradictory) reminder: “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”

* Colin Angle

###

As we practice prudence, we might recall that it was on this date in 1971 that UNIX was released to the world. A multi-tasking, multi-user operating system, it was initially developed at AT&T for use within the Bell System. Unix systems are characterized by a modular design that is sometimes called the “Unix philosophy.” Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a “software tools” movement. Over time, the leading developers of Unix (and of programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself– a key part of the Unix philosophy.
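A minimal sketch of that “software tools” idea, assuming a Unix-like system with the standard sort and uniq utilities available: small, single-purpose programs composed through a pipe, each reading and writing plain text.

```python
import subprocess

# The "software tools" idea in miniature: compose small programs via a pipe.
# Shell form of the same composition:  printf 'b\na\nb\n' | sort | uniq -c
text = "b\na\nb\n"

sort = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout,
                        stdout=subprocess.PIPE, text=True)

sort.stdin.write(text)  # feed the first stage
sort.stdin.close()      # signal end of input so sort can emit its output
sort.stdout.close()     # hand the read end of the pipe to uniq alone
sort.wait()

output, _ = uniq.communicate()
print(output)           # counts each distinct line, e.g. "1 a" and "2 b"
```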

Unix’s interactivity made it a perfect medium for TCP/IP networking protocols, which were quickly implemented on the Unix versions widely used on relatively inexpensive computers– a development that contributed to the Internet explosion of worldwide real-time connectivity. Indeed, Unix was the medium for ARPANET, the forerunner of the internet as we know it.

Ken Thompson (sitting) and Dennis Ritchie, principal developers of Unix, working together at a PDP-11 (source)

Written by (Roughly) Daily

November 3, 2022 at 1:00 am
