Posts Tagged ‘computers’
“They laughed at Columbus and they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”*…

Most technologies that grow up to be important, Benedict Evans observes, start out looking like toys with little or no practical application.
Some of the most important things of the last 100 years or so looked like this. Aircraft, cars, telephones, mobile phones and personal computers were all dismissed as toys. “Well done Mr Wright – you flew over a few sand dunes. Why do we care?”
But on the other hand, plenty of things that looked like useless toys never did become anything more than that. The fact that people laughed at X and X then started working does not tell us that if people now laugh at Y or Z, those will work too.
So, we have a pair of equal and opposite fallacies. There is no predictive value in saying ‘that doesn’t work’ or ‘that looks like a toy’, and there is also no predictive value in saying ‘people always say that.’ As [Wolfgang] Pauli put it, statements like this are ‘not even wrong’ – they give no insight into what will happen.
Instead, you have to go one level further. You need a theory for why this will get better, or why it won’t, and for why people will change their behaviour, or for why they won’t…
That’s to say, Evans suggests, you need to be able to envision a roadmap from “toy” to wide, practical use…
These roadmaps can come in steps. It took quite a few steps to get from the [Wright Flyer, pictured above left] to something that made ocean liners obsolete, and each of those steps was useful. The PC also came in steps – from hobbyists to spreadsheets to web browsers. The same thing for mobile – we went from expensive analogue phones for a few people to cheap GSM phones for billions of people to smartphones that changed what mobile meant. But there was always a path. The Apple 1, Netscape and the iPhone all looked like impractical toys that ‘couldn’t be used for real work’, but there were obvious roadmaps to change that – not necessarily all the way to the future, but certainly to a useful next step.
Equally, sometimes the roadmap is ‘forget about this for 20 years’. The Newton or the IBM Simon were just too early, as was the first wave of VR in the 80s and 90s. You could have said, deterministically, that Moore’s Law would make VR or pocket computers useful at some point, so there was notionally a roadmap, but the roadmap told you to work on something else. This is different to the Rocket Belt [pictured above right], where there was no foreseeable future development that would make it work…
Much the same sort of questions apply to the other side of the problem – even if this did get very cheap and very good, who would use it? You can’t do a waterfall chart of an engineering roadmap here, but you can again ask questions – what would have to change? Are you proposing a change in human nature, or a different way of expressing it? What’s your theory of why things will change or why they won’t?
The thread through all of this is that we don’t know what will happen, but we do know what could happen – we don’t know the answer, but we can at least ask useful questions. The key challenge to any assertion about what will happen, I think, is to ask ‘well, what would have to change?’ Could this happen, and if it did, would it work? We’re always going to be wrong sometimes, but we can try to be wrong for the right reasons…
A practical approach to technology forecasting: “Not even wrong: predicting tech,” from @benedictevans.
* Carl Sagan
###
As we ponder prospects, we might send carefully-calculated birthday greetings to J. Presper Eckert; he was born on this date in 1919. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“On the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on our way to using it to create a world of suffering and chaos. Paradoxical, no?”*…
Joseph Weizenbaum, a distinguished professor at MIT, was one of the fathers of artificial intelligence and computing as we know it; he was also one of their earliest critics– one whose concerns remain all too current. After a review of his warnings, Librarian Shipwreck shares a still-relevant set of questions Weizenbaum proposed…
At the end of his essay “Once more—A Computer Revolution,” which appeared in the Bulletin of the Atomic Scientists in 1978, Weizenbaum concluded with a set of five questions. As he put it, these were the sorts of questions that “are almost never asked” when it comes to this or that new computer-related development. These questions did not lend themselves to simple yes or no answers, but instead called for serious debate and introspection. Thus, in the spirit of that article, let us conclude this piece not with definitive answers, but with more questions for all of us to contemplate. Questions that were “almost never asked” in 1978, and which are still “almost never asked” in 2023. They are as follows:
• Who is the beneficiary of our much-advertised technological progress and who are its victims?
• What limits ought we, the people generally and scientists and engineers particularly, to impose on the application of computation to human affairs?
• What is the impact of the computer, not only on the economies of the world or on the war potential of nations, etc…but on the self-image of human beings and on human dignity?
• What irreversible forces is our worship of high technology, symbolized most starkly by the computer, bringing into play?
• Will our children be able to live with the world we are here and now constructing?
As Weizenbaum put it, “much depends on answers to these questions.”
Much still depends on answers to these questions.
Eminently worth reading in full: “‘Computers enable fantasies’ – on the continued relevance of Weizenbaum’s warnings,” from @libshipwreck.
See also: “An island of reason in the cyberstream – on the life and thought of Joseph Weizenbaum.”
* Joseph Weizenbaum (1983)
###
As we stay grounded, we might spare a thought for George Stibitz; he died on this date in 1995. A Bell Labs researcher, he was known for his work in the 1930s and 1940s on the realization of Boolean logic digital circuits using electromechanical relays as the switching element– work for which he is internationally recognized as one of the fathers of the modern digital computer.
In 1937, Stibitz, a scientist at Bell Laboratories, built a digital machine based on relays, flashlight bulbs, and metal strips cut from tin cans. He called it the “Model K” because most of it was constructed on his kitchen table. It worked on the principle that when two relays were activated, they caused a third relay to become active; this third relay represented the sum of the operation. Then, in 1940, he gave a demonstration of the first remote operation of a computer.
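The behavior Stibitz wired up– input relays whose combined states determine a sum and a carry– is, in modern terms, a binary half-adder. A minimal sketch in Python (illustrative only; the names are ours, not a schematic of the actual device):

```python
# A half-adder expressed as boolean logic: the principle behind
# relay machines like Stibitz's "Model K" (an illustration, not
# a diagram of the real hardware).

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Return (sum, carry) for two one-bit inputs."""
    s = a != b        # XOR: the sum relay closes when exactly one input is on
    carry = a and b   # AND: the carry relay closes when both inputs are on
    return s, carry

# Truth table: adding two one-bit numbers
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry {int(c)}, sum {int(s)}")
```

Chain a second stage to fold in a carry from a previous bit and you have a full adder– and, relay by relay, a machine that can do arithmetic.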
“Those who can imagine anything, can create the impossible”*…
As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…
… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.
[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.
A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.
“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…
Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.
###
As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

“If you are confused by the underlying principles of quantum technology – you get it!”*…
A tour through the map above– a helpful primer on the origins, development, and possible futures of quantum computing…
From Dominic Walliman (@DominicWalliman) on @DomainOfScience.
* Kevin Coleman
###
As we embrace uncertainty, we might spare a thought for Alan Turing; he died on this date in 1954. A British mathematician, he was a foundational computer science pioneer– inventor of the Turing Machine, creator of the “Turing Test” (perhaps to be made more relevant by quantum computing), and inspiration for the Turing Award– and a cryptographer, a leading member of the team that cracked the Enigma code during WWII.