(Roughly) Daily


“Machines take me by surprise with great frequency”*…

In search of universals in the 17th century, Gottfried Leibniz imagined the calculus ratiocinator, a theoretical framework for universal logical calculation, an idea that led Norbert Wiener to suggest that Leibniz should be considered the patron saint of cybernetics. In the 19th century, Charles Babbage and Ada Lovelace took a pair of whacks at making it real.

Ironically, it was confronting the impossibility of a universal calculator that led to modern computing. In 1936 (the same year that Charlie Chaplin released Modern Times) Alan Turing (following on Gödel’s demonstration that mathematics is incomplete and addressing Hilbert’s “decision problem,” querying the limits of computation) published the (notional) design of a “machine” that elegantly demonstrated those limits– and, as Sheon Han explains, birthed computing as we know it…

… [Hilbert’s] question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.

The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation, but also proved a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It’s abstract because it doesn’t (and can’t) physically exist as a tangible device. Instead, it’s a conceptual model of computation: If the machine can calculate a function, then the function is computable.

With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process — a set of instructions, which today we’d call an algorithm — that can always determine whether a given statement is true?…

… in 1936, Church and Turing — using different methods — independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway’s Game of Life, are undecidable: No algorithm can determine whether a certain pattern will appear from an initial pattern.

Beyond answering these fundamental questions, Turing’s machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of other Turing machines (their rules and input tapes) and simulate their behaviors on its own input tape, producing the same output that the simulated machine would produce, just as today’s computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture — called the von Neumann architecture — that made the universal Turing machine concept possible in a real-life machine…
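
To make the excerpt's idea concrete, here is a minimal sketch of a Turing machine simulator in Python (my illustration, not anything from the article): the transition table is just data, and the run loop is the "read a symbol, write a symbol, move, change state" cycle described above. The example machine, a unary incrementer, is an illustrative assumption; hand the same loop a different table and it simulates a different machine, which is the universal-machine idea in miniature.

```python
# A minimal Turing machine simulator (illustrative sketch, not from the article).
# rules maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(rules, tape, state="start", accept="halt", max_steps=10_000):
    """Simulate a Turing machine; head_move is -1, 0, or +1; '_' is the blank symbol."""
    cells = dict(enumerate(tape))                 # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, "_")
        if (state, symbol) not in rules:
            raise RuntimeError("no applicable rule: the machine rejects")
        state, cells[head], move = rules[(state, symbol)]
        head += move
    raise RuntimeError("step limit reached (whether it ever halts is, in general, undecidable)")

# Example machine (hypothetical): append one '1' to a unary number.
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right across the 1s
    ("start", "_"): ("halt",  "1",  0),   # write a 1 on the first blank and halt
}

print(run_turing_machine(increment, "111"))   # prints "1111"
```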

As Turing said, “if a machine is expected to be infallible, it cannot also be intelligent.” On the importance of thought experiments: “The Most Important Machine That Was Never Built,” from @sheonhan in @QuantaMagazine.

* Alan Turing

###

As we sum it up, we might spare a thought for Martin Gardner; he died on this date in 2010.  Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf. especially his work on Lewis Carroll– including the delightful Annotated Alice– and on G.K. Chesterton).  And he was a fierce debunker of pseudoscience: a founding member of CSICOP and contributor of a column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.


Written by (Roughly) Daily

May 22, 2023 at 1:00 am

“They laughed at Columbus and they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”*…

The Wright Flier could only fly 200 meters, but there was a clear path to make it better. The Rocket Belt flew for 21 seconds because it used almost a liter of fuel per second, and to fly like this for half an hour you’d need almost two tonnes of fuel, which you can’t carry on your back. There was no path to make it better without changing the laws of physics. (There’s no hindsight or survivor bias at work here– we knew it in 1962.)

Most technologies that grow up to be important, Benedict Evans observes, start out looking like toys with little or no practical application.

Some of the most important things of the last 100 years or so looked like this. Aircraft, cars, telephones, mobile phones and personal computers were all dismissed as toys. “Well done Mr Wright – you flew over a few sand dunes. Why do we care?”

But on the other hand, plenty of things that looked like useless toys never did become anything more than that. The fact that people laughed at X and X then started working does not tell us that if people now laugh at Y or Z, those will work too.

So, we have a pair of equal and opposite fallacies. There is no predictive value in saying ‘that doesn’t work’ or ‘that looks like a toy’, and there is also no predictive value in saying ‘people always say that.’ As [Wolfgang] Pauli put it, statements like this are ‘not even wrong’ – they give no insight into what will happen.

Instead, you have to go one level further. You need a theory for why this will get better, or why it won’t, and for why people will change their behaviour, or for why they won’t…

That’s to say, Evans suggests, you need to be able to envision a roadmap from “toy” to wide, practical use…

These roadmaps can come in steps. It took quite a few steps to get from the [Wright Flier, pictured above left] to something that made ocean liners obsolete, and each of those steps was useful. The PC also came in steps – from hobbyists to spreadsheets to web browsers. The same thing for mobile – we went from expensive analogue phones for a few people to cheap GSM phones for billions of people to smartphones that changed what mobile meant. But there was always a path. The Apple 1, Netscape and the iPhone all looked like impractical toys that ‘couldn’t be used for real work’, but there were obvious roadmaps to change that – not necessarily all the way to the future, but certainly to a useful next step.

Equally, sometimes the roadmap is ‘forget about this for 20 years’. The Newton or the IBM Simon were just too early, as was the first wave of VR in the 80s and 90s. You could have said, deterministically, that Moore’s Law would make VR or pocket computers useful at some point, so there was notionally a roadmap, but the roadmap told you to work on something else. This is different to the Rocket Belt [pictured above right], where there was no foreseeable future development that would make it work…

Much the same sort of questions apply to the other side of the problem – even if this did get very cheap and very good, who would use it? You can’t do a waterfall chart of an engineering roadmap here, but you can again ask questions – what would have to change? Are you proposing a change in human nature, or a different way of expressing it? What’s your theory of why things will change or why they won’t?

The thread through all of this is that we don’t know what will happen, but we do know what could happen – we don’t know the answer, but we can at least ask useful questions. The key challenge to any assertion about what will happen, I think, is to ask ‘well, what would have to change?’ Could this happen, and if it did, would it work? We’re always going to be wrong sometimes, but we can try to be wrong for the right reasons…

A practical approach to technology forecasting: “Not even wrong: predicting tech,” from @benedictevans.

* Carl Sagan

###

As we ponder prospects, we might send carefully-calculated birthday greetings to J. Presper Eckert; he was born on this date in 1919. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose electronic computer, the ENIAC (see here and here) for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

Eckert (standing and gesturing) and Mauchly (at the console), demonstrating the UNIVAC to Walter Cronkite (source)

Written by (Roughly) Daily

April 9, 2023 at 1:00 am

“The functionalist organization, by privileging progress (i.e. time), causes the condition of its own possibility”*…

Meet the new boss, painfully similar to the old boss…

While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…

From Henry Farrell and Marion Fourcade, a reminder that what’s old is new again: “The Moral Economy of High-Tech Modernism,” in an issue of Daedalus, edited by Farrell and Margaret Levi (@margaretlevi).

See also: “The Algorithm Society and Its Discontents” (or here) by Brad DeLong (@delong).

Apposite: “What Greek myths can teach us about the dangers of AI.”


* “The functionalist organization, by privileging progress (i.e. time), causes the condition of its own possibility–space itself–to be forgotten: space thus becomes the blind spot in a scientific and political technology. This is the way in which the Concept-city functions: a place of transformations and appropriations, the object of various kinds of interference but also a subject that is constantly enriched by new attributes, it is simultaneously the machinery and the hero of modernity.” – Michel de Certeau

###

As we ponder platforms, we might recall that it was on this date in 1955 that the first computer operating system was demonstrated…

Computer pioneer Doug Ross demonstrates the Director tape for MIT’s Whirlwind machine. It’s a new idea: a permanent set of instructions on how the computer should operate.

Six years in the making, MIT’s Whirlwind computer was the first digital computer that could display real-time text and graphics on a video terminal, which was then just a large oscilloscope screen. Whirlwind used 4,500 vacuum tubes to process data…

Another one of its contributions was Director, a set of programming instructions…

March 8, 1955: The Mother of All Operating Systems

The first permanent set of instructions for a computer, it was in essence the first operating system. Loaded by paper tape, Director allowed operators to load multiple problems in Whirlwind by taking advantage of newer, faster photoelectric tape reader technology, eliminating the need for manual human intervention in changing tapes on older mechanical tape readers.
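
For a modern-flavored picture of what a “director” buys you, here is a toy sketch in Python (my stand-in, not MIT’s actual tape format or commands): a single control loop that works through a queue of jobs on its own, which is the “no manual intervention between problems” idea in its simplest form.

```python
# Toy "director": one permanent control loop that runs a queued batch of jobs in
# sequence, standing in for the instruction tape that sequenced Whirlwind's work.
# The job names and functions below are illustrative stand-ins, not the 1955 setup.

def job_trajectory():
    return sum(n * 0.5 for n in range(100))     # pretend "problem" #1

def job_table():
    return [n * n for n in range(10)]           # pretend "problem" #2

def director(jobs):
    """Load and run every queued job without a human swapping tapes in between."""
    for name, job in jobs:
        print(f"loading {name} ...")
        print(f"{name} finished: {job()}")

director([("trajectory", job_trajectory), ("table", job_table)])
```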

Ross explaining the system (source)

“On the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on our way to using it to create a world of suffering and chaos. Paradoxical, no?”*…

Joseph Weizenbaum, a distinguished professor at MIT, was one of the fathers of artificial intelligence and computing as we know it; he was also one of its earliest critics– one whose concerns remain all too current. After a review of his warnings, Librarian Shipwreck shares a still-relevant set of questions Weizenbaum proposed…

At the end of his essay “Once more—A Computer Revolution,” which appeared in the Bulletin of the Atomic Scientists in 1978, Weizenbaum concluded with a set of five questions. As he put it, these were the sorts of questions that “are almost never asked” when it comes to this or that new computer-related development. These questions did not lend themselves to simple yes or no answers, but instead called for serious debate and introspection. Thus, in the spirit of that article, let us conclude this piece not with definitive answers, but with more questions for all of us to contemplate. Questions that were “almost never asked” in 1978, and which are still “almost never asked” in 2023. They are as follows:

• Who is the beneficiary of our much-advertised technological progress and who are its victims?

• What limits ought we, the people generally and scientists and engineers particularly, to impose on the application of computation to human affairs?

• What is the impact of the computer, not only on the economies of the world or on the war potential of nations, etc…but on the self-image of human beings and on human dignity?

• What irreversible forces is our worship of high technology, symbolized most starkly by the computer, bringing into play?

• Will our children be able to live with the world we are here and now constructing?

As Weizenbaum put it, “much depends on answers to these questions.”

Much still depends on answers to these questions.

Eminently worth reading in full: “‘Computers enable fantasies’ – on the continued relevance of Weizenbaum’s warnings,” from @libshipwreck.

See also: “An island of reason in the cyberstream – on the life and thought of Joseph Weizenbaum.”

* Joseph Weizenbaum (1983)

###

As we stay grounded, we might spare a thought for George Stibitz; he died on this date in 1995. A Bell Labs researcher, he was known for his work in the 1930s and 1940s on the realization of Boolean logic digital circuits using electromechanical relays as the switching element– work for which he is internationally recognized as one of the fathers of the modern digital computer.

In 1937, Stibitz, then a scientist at Bell Laboratories, built a digital machine based on relays, flashlight bulbs, and metal strips cut from tin cans. He called it the “Model K” because most of it was constructed on his kitchen table. It worked on the principle that if two relays were activated, they caused a third relay to become active, with that third relay representing the sum of the operation. Then, in 1940, he gave a demonstration of the first remote operation of a computer.
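
In Boolean terms, a one-bit relay adder of the Model K sort reduces to a “half adder”: the sum bit is the exclusive-or of the two inputs and the carry bit is their AND. Here is a quick sketch of that logic in Python (the standard half-adder, offered as an illustration rather than a model of Stibitz’s actual wiring).

```python
# A one-bit "half adder" in Boolean form: the kind of logic a few relays can realize.
# (Standard half-adder logic, shown for illustration; not Stibitz's actual circuit.)

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers; return (sum_bit, carry_bit)."""
    return a ^ b, a & b        # XOR gives the sum bit, AND gives the carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")   # e.g. 1 + 1 -> carry 1, sum 0
```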


“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…
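
As a toy picture of what “learning by trial and error” can mean for a system you can only poke and observe, here is a minimal random-perturbation sketch in Python. The black box and the keep-it-if-it-helps update rule are my illustrative assumptions, not the algorithm Scellier or McMahon actually use; the point is only that nothing in the loop needs access to the system’s internals.

```python
import random

# Trial-and-error "training" of a black box we can only query: perturb its tunable
# knobs at random and keep a tweak only if it reduces the error on example data.
# The linear black box and the data below are illustrative stand-ins.

def black_box(x, params):
    w, b = params
    return w * x + b                         # pretend this is a physical system's response

def loss(params, data):
    return sum((black_box(x, params) - y) ** 2 for x, y in data)

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]    # target behavior: y = 2x + 1
params = [0.0, 0.0]                                   # the adjustable "knobs"

for _ in range(5000):
    trial = [p + random.gauss(0, 0.1) for p in params]   # random tweak
    if loss(trial, data) < loss(params, data):           # keep it only if it helps
        params = trial

print(params)   # ends up close to [2.0, 1.0]
```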

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing

###

As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame

