(Roughly) Daily

Posts Tagged ‘operating system’

“The functionalist organization, by privileging progress (i.e. time), causes the condition of its own possibility”*…

Meet the new boss, painfully similar to the old boss…

While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…

From Henry Farrell and Marion Fourcade, a reminder that what’s old is new again: “The Moral Economy of High-Tech Modernism,” in an issue of Daedalus, edited by Farrell and Margaret Levi (@margaretlevi).

See also: “The Algorithm Society and Its Discontents” (or here) by Brad DeLong (@delong).

Apposite: “What Greek myths can teach us about the dangers of AI.”

(Image above: source)

* “The functionalist organization, by privileging progress (i.e. time), causes the condition of its own possibility–space itself–to be forgotten: space thus becomes the blind spot in a scientific and political technology. This is the way in which the Concept-city functions: a place of transformations and appropriations, the object of various kinds of interference but also a subject that is constantly enriched by new attributes, it is simultaneously the machinery and the hero of modernity.” – Michel de Certeau


As we ponder platforms, we might recall that it was on this date in 1955 that the first computer operating system was demonstrated…

Computer pioneer Doug Ross demonstrates the Director tape for MIT’s Whirlwind machine. It’s a new idea: a permanent set of instructions on how the computer should operate.

Six years in the making, MIT’s Whirlwind computer was the first digital computer that could display real-time text and graphics on a video terminal, which was then just a large oscilloscope screen. Whirlwind used 4,500 vacuum tubes to process data…

Another one of its contributions was Director, a set of programming instructions…

March 8, 1955: The Mother of All Operating Systems

The first permanent set of instructions for a computer, it was in essence the first operating system. Loaded by paper tape, Director allowed operators to load multiple problems in Whirlwind by taking advantage of newer, faster photoelectric tape reader technology, eliminating the need for manual human intervention in changing tapes on older mechanical tape readers.

Ross explaining the system (source)

“It’s going to be interesting to see how society deals with artificial intelligence”*…

Interesting, yes… and important. Stephanie Losi notes that “in some other fields, insufficient regulation and lax controls can lead to bad outcomes, but it can take years. With AI, insufficient regulation and lax controls could lead to bad outcomes extremely rapidly.” She offers a framework for thinking about the kinds of regulation we might need…

Recent advances in machine learning like DALL-E 2 and Stable Diffusion show the strengths of artificial narrow intelligence. That means they perform specialized tasks instead of general, wide-ranging ones. Artificial narrow intelligence is often regarded as safer than a hypothetical artificial general intelligence (AGI), which could challenge or surpass human intelligence. 

But even within their narrow domains, DALL-E, Stable Diffusion, and similar models are already raising questions like, “What is real art?” And large language models like GPT-3 and Copilot dangle the promise of intuitive software development sans the detailed syntax knowledge required for traditional programming. Disruption looms large—and imminent. 

One of the challenges of risk management is that technology innovation tends to outpace it. It’s less fun to structure a control framework than it is to walk on fresh snow, so breakthroughs happen and then risk management catches up. But with AI, preventive controls are especially important, because AI is so fast that detective and corrective controls might not have time to take effect. So, making sure controls do keep up with innovation might not be fun or flashy, but it’s vital. Regulation done right could increase the chance that the first (and possibly last) AGI created is not hostile as we would define that term.

In broad strokes, here are some aspects of a control framework for AI…

Eminently worth reading in full: “Possible Paths for AI Regulation.”

See also: “Can regulators keep up with AI in healthcare?” (source of the image above)

And as we ponder constructive constraints, we might keep in mind Gray Scott‘s (seemingly contradictory) reminder: “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”

* Colin Angle


As we practice prudence, we might recall that it was on this date in 1971 that UNIX was released to the world. A multi-tasking, multi-user operating system, it was initially developed at AT&T for use within the Bell System. Unix systems are characterized by a modular design that is sometimes called the “Unix philosophy.” Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a “software tools” movement. Over time, the leading developers of Unix (and of the programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself: a key part of the Unix philosophy.
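That philosophy—small, single-purpose programs composed through pipes—is easiest to see in a shell one-liner. As an illustrative sketch (not from the post), here is the classic word-frequency pipeline, in which each standard utility does one job and the pipe connects them:

```shell
# Count word frequencies: each tool does one thing, and "|" composes them.
printf 'to be or not to be\n' |
  tr ' ' '\n' |   # split the line into one word per line
  sort |          # group identical words together
  uniq -c |       # count each run of identical words
  sort -rn        # list the most frequent words first
```

Replacing any one stage (say, `sort -rn` with `head`) changes the result without touching the others, which is precisely the modularity and reusability the early Unix developers championed.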

Unix’s interactivity made it a perfect medium for networking: TCP/IP protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity. Indeed, Unix was the medium for ARPANET, the forerunner of the internet as we know it.

Ken Thompson (sitting) and Dennis Ritchie, principal developers of Unix, working together at a PDP-11 (source)

Written by (Roughly) Daily

November 3, 2022 at 1:00 am
