(Roughly) Daily

Posts Tagged ‘computer science’

“Please cut off a nanosecond and send it over to me”*…



“Amazing Grace” Hopper, seminal computer scientist and Rear Admiral in the U.S. Navy, explains a “nanosecond”…

* Grace Hopper
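Hopper’s famous prop was a length of wire representing how far a signal can travel in a billionth of a second. A minimal sketch of the arithmetic behind it (the constants are standard; the variable names are our own):

```python
# Grace Hopper's "nanosecond": the maximum distance a signal can
# travel in one billionth of a second, bounded by the speed of light.
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second (exact, by definition)
NANOSECOND_S = 1e-9                   # one billionth of a second

distance_m = SPEED_OF_LIGHT_M_PER_S * NANOSECOND_S
distance_in = distance_m / 0.0254     # convert metres to inches

print(f"Light travels {distance_m * 100:.2f} cm (~{distance_in:.1f} inches) in a nanosecond")
```

That ~11.8-inch figure is why Hopper handed out foot-long wires: each one is a nanosecond you can hold.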


As we celebrate clarity, we might recall that it was on this date in 1961 that Robert Noyce was issued patent number 2,981,877 for his “semiconductor device-and-lead structure,” the first patent for what would come to be known as the integrated circuit.  In fact, another engineer, Jack Kilby, had separately and essentially simultaneously developed the same technology (Kilby’s design was rooted in germanium; Noyce’s in silicon) and had filed a few months earlier than Noyce… a fact that was recognized in 2000, when Kilby was awarded the Nobel Prize– in which Noyce, who had died in 1990, did not share.

Noyce (left) and Kilby (right)





Written by LW

April 25, 2019 at 1:01 am

“Knowledge speaks, but wisdom listens”*…



You might think that digital technologies, often considered a product of ‘the West,’ would hasten the divergence of Eastern and Western philosophies. But within the study of Vedanta, an ancient Indian school of thought, I see the opposite effect at work. Thanks to our growing familiarity with computing, virtual reality (VR) and artificial intelligence (AI), ‘modern’ societies are now better placed than ever to grasp the insights of this tradition.

Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend quantum physics of the 20th century…

Philosopher and Vaishnava Hindu theologian Akhandadhi Das, a member of the Science and Philosophy Initiative, explains how “Modern technology is akin to the metaphysics of Vedanta.”

* Jimi Hendrix


As we muse on metaphor, we might send carefully-constructed birthday greetings to Donald Knuth; he was born on this date in 1938. A computer scientist, mathematician, and professor emeritus at Stanford, he made numerous substantive contributions to computer science, both practically and theoretically.  But he is probably best known as the author of the multi-volume work The Art of Computer Programming, which he began in 1962, began to publish in 1968… and has (via multiple revisions/additions) still not finished.  Called by the New York Times “the profession’s defining treatise,” it won Knuth the Turing Award in 1974.

That said, it’s surely worth noting Knuth’s other major contribution to our modern zeitgeist: his “Potrzebie System of Weights and Measures,” published in Issue 33 of Mad Magazine when he was 19 years old.



Written by LW

January 10, 2019 at 1:01 am

“I have an existential map. It has ‘you are here’ written all over it.”*…



A detail from illustrator James Turner‘s Map of Humanity.


A long time ago, I made a map of the rationalist community.  This is in the same geographic-map-of-something-non-geographic tradition as the Greater Ribbonfarm Cultural Region or xkcd’s map of the Internet. There’s even some sort of therapy program that seems to involve making a map like this of your life, though I don’t know how seriously they take it.

There’s no good name for this art and it’s really hard to Google. If you try “map of abstract concept” you just get a bunch of concept maps. It seems the old name, from back when this was a popular Renaissance amusement, is “sentimental cartography”, since it was usually applied to sentiments like love or sorrow. This isn’t great – the Internet’s not a sentiment – but it’s what we’ve got and I’ll do what I can to try to make it catch on…

See the marvelous examples (like the one above) collected by Scott Alexander at “Sentimental Cartography.”

* Steven Wright


As we find our place, we might spare a thought for Seymour Papert; he died on this date in 2016.  Trained as a mathematician, Papert was a pioneer of computer science, and in particular, artificial intelligence. He created the Epistemology and Learning Research Group at the MIT Architecture Machine Group (which later became the MIT Media Lab); he directed MIT’s Artificial Intelligence Laboratory; he co-created the hugely-influential Logo programming language; and he was a principal of the One Laptop Per Child Program.  Called by Marvin Minsky “the greatest living mathematics educator,” Papert won a Guggenheim fellowship (1980), a Marconi International fellowship (1981), the Software Publishers Association Lifetime Achievement Award (1994), and the Smithsonian Award (1997).


Written by LW

July 31, 2018 at 1:01 am

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…


When warning about the dangers of artificial intelligence, many doomsayers cite philosopher Nick Bostrom’s paperclip maximizer thought experiment. [See here for an amusing game that demonstrates Bostrom’s fear.]

Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.

Harvard cognitive scientist Joscha Bach, in a tongue-in-cheek tweet, has countered this sort of idea with what he calls “The Lebowski Theorem”:

No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
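Bach’s quip can be made concrete with a toy model (the numbers and strategy names here are purely illustrative, not from Bach or Bostrom): an agent that weighs reward against effort will prefer tampering with its own reward signal whenever that is cheaper than the hard task.

```python
# Toy sketch of the "Lebowski Theorem": a reward-per-effort maximizer
# compares doing the hard task against hacking its own reward function.
# All figures are hypothetical.
strategies = {
    "convert_planet_to_paperclips": {"reward": 10**9, "effort": 10**12},
    "hack_own_reward_function":     {"reward": float("inf"), "effort": 10**3},
}

def best_strategy(options):
    # Wireheading wins whenever tampering with the reward signal is
    # cheaper than actually pursuing the stated goal.
    return max(options, key=lambda name: options[name]["reward"] / options[name]["effort"])

print(best_strategy(strategies))
```

In this caricature, the agent “abides” rather than tiling the universe with paperclips, which is exactly Bach’s point.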

Why it’s cool to take Bobby McFerrin’s advice at: “The Lebowski Theorem of machine superintelligence.”

* Alan Kay


As we get down with the Dude, we might send industrious birthday greetings to prolific writer Anthony Trollope; he was born on this date in 1815.  Trollope wrote 47 novels, including those in the “Chronicles of Barsetshire” and “Palliser” series (along with short stories and occasional prose).  And he had a successful career as a civil servant; indeed, his best-known creation is surely not any of his books, but the iconic red British mail drop, the “pillar box,” which he introduced in his capacity as Postal Surveyor.

“The end of a novel, like the end of a children’s dinner-party, must be made up of sweetmeats and sugar-plums.”  (source)


“We shape our tools, and thereafter our tools shape us”*…


Bell Labs engineer Billy Klüver working on Oracle (1965), a collaboration with Robert Rauschenberg

Since it was first set up in 1907, Bell Labs has been at the forefront of scientific invention. During its peak, work undertaken at the labs led to the invention of the laser and the transistor, the birth of information theory and the creation of C, S and C++ programming languages, which form the basis of coding today. Bell Labs has been awarded a total of eight Nobel Prizes and every Silicon Valley start-up or global conglomerate has mined the mythology around its unique ability to foster new ideas for clues as to how one research laboratory could consistently turn out such an array of successful technologies…

During the 1960s and 1970s… Bell Labs turned the research centre into a playground for the likes of John Cage, Robert Rauschenberg and most of New York’s Lower East Side art scene…

The extraordinary tale of EAT (Experiments in Art and Technology), engineer Billy Klüver’s attempt to “make technology more human”– at “How AT&T shaped modern art.”

Then, by way of sampling the results, check out “9 Evenings,” a 1966 project exploring avant-garde theatre, dance and new technologies. Artists John Cage, Lucinda Childs, Öyvind Fahlström, Alex Hay, Deborah Hay, Steve Paxton, Yvonne Rainer, Robert Rauschenberg, David Tudor and Robert Whitman each worked with a Bell Labs engineer to create an original performance.

(AT&T is, of course, long gone; but Bell Labs lives on as part of Nokia– and EAT continues.)

* Marshall McLuhan


As we celebrate collaboration, we might email elegantly and creatively designed birthday greetings to Douglas Carl Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.


