(Roughly) Daily

“It’s going to be interesting to see how society deals with artificial intelligence”*…

Interesting, yes… and important. Stephanie Losi notes that “in some other fields, insufficient regulation and lax controls can lead to bad outcomes, but it can take years. With AI, insufficient regulation and lax controls could lead to bad outcomes extremely rapidly.” She offers a framework for thinking about the kinds of regulation we might need…

Recent advances in machine learning like DALL-E 2 and Stable Diffusion show the strengths of artificial narrow intelligence: such models perform specialized tasks instead of general, wide-ranging ones. Artificial narrow intelligence is often regarded as safer than a hypothetical artificial general intelligence (AGI), which could challenge or surpass human intelligence.

But even within their narrow domains, DALL-E, Stable Diffusion, and similar models are already raising questions like, “What is real art?” And large language models like GPT-3 and Copilot dangle the promise of intuitive software development sans the detailed syntax knowledge required for traditional programming. Disruption looms large and imminent.

One of the challenges of risk management is that technology innovation tends to outpace it. It’s less fun to structure a control framework than it is to walk on fresh snow, so breakthroughs happen and then risk management catches up. But with AI, preventive controls are especially important, because AI is so fast that detective and corrective controls might not have time to take effect. So, making sure controls do keep up with innovation might not be fun or flashy, but it’s vital. Regulation done right could increase the chance that the first (and possibly last) AGI created is not hostile as we would define that term.
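To make the preventive/detective distinction concrete, here is a minimal sketch (mine, not from Losi’s piece, and with hypothetical names throughout: `BLOCKED_PATTERNS`, `preventive_gate`, `call_model`) of a preventive control: a gate that runs before a model is ever invoked, rather than detecting and correcting bad output after the fact.

```python
import re

# Hypothetical deny-list; a real control framework would be far richer.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)social security number|\bssn\b"),
    re.compile(r"(?i)synthesize\s+(a\s+)?(weapon|pathogen)"),
]

def preventive_gate(prompt: str) -> bool:
    """Return True if the request may proceed to the model.

    This runs BEFORE the model is invoked: the point of a preventive
    control is that risky output is never generated, so detective and
    corrective controls never have to race the model's speed.
    """
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

def call_model(prompt: str) -> str:
    # Stand-in for a real model invocation.
    return f"(model output for {prompt!r})"

def handle_request(prompt: str) -> str:
    if not preventive_gate(prompt):
        return "Request declined by policy."  # nothing was ever generated
    return call_model(prompt)

if __name__ == "__main__":
    print(handle_request("Summarize the Unix philosophy"))
    print(handle_request("What is my neighbor's social security number?"))
```

A real preventive layer would involve far more (policy models, human review, rate limits), but the structural point stands: the check happens before generation, not after.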

In broad strokes, here are some aspects of a control framework for AI…

Eminently worth reading in full: “Possible Paths for AI Regulation.”

See also: “Can regulators keep up with AI in healthcare?”

And as we ponder constructive constraints, we might keep in mind Gray Scott’s (seemingly contradictory) reminder: “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”

* Colin Angle

###

As we practice prudence, we might recall that it was on this date in 1971 that UNIX was released to the world. A multi-tasking, multi-user operating system, it was initially developed at AT&T’s Bell Labs for use within the Bell System. Unix systems are characterized by a modular design that is sometimes called the “Unix philosophy.” Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a “software tools” movement. Over time, the leading developers of Unix (and of the programs that ran on it) established a set of cultural norms for developing software, norms that became as important and influential as the technology of Unix itself, a key part of the Unix philosophy.
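That “software tools” idea is easy to make concrete: small programs that each do one thing, read standard input, write standard output, and compose via pipes. Here is a minimal sketch of such a filter (the filename `freq.py` is just for illustration), in the spirit of the classic `sort | uniq -c` pipeline:

```python
#!/usr/bin/env python3
"""freq.py -- a Unix-style filter in the "software tools" spirit.

Reads lines from stdin, writes "count line" pairs to stdout,
most frequent first, so it composes with other tools via pipes:

    cat access.log | python3 freq.py | head
"""
import sys
from collections import Counter

def main() -> None:
    # Count identical input lines, ignoring trailing newlines.
    counts = Counter(line.rstrip("\n") for line in sys.stdin)
    for text, n in counts.most_common():
        print(f"{n:7d} {text}")

if __name__ == "__main__":
    main()
```

Any program written this way drops into a pipeline alongside grep, sort, and friends, which is exactly the reusability the early Unix developers were after.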

Unix’s interactivity made it a perfect medium for networking: TCP/IP protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity. Indeed, Unix was the medium of the Arpanet, the forerunner of the internet as we know it.

Ken Thompson (sitting) and Dennis Ritchie, principal developers of Unix, working together at a PDP-11 (source)

Written by (Roughly) Daily

November 3, 2022 at 1:00 am
