Posts Tagged ‘prediction’
“They laughed at Columbus and they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”*…

Most technologies that grow up to be important, Benedict Evans observes, start out looking like toys with little or no practical application.
Some of the most important things of the last 100 years or so looked like this. Aircraft, cars, telephones, mobile phones and personal computers were all dismissed as toys. “Well done Mr Wright – you flew over a few sand dunes. Why do we care?”
But on the other hand, plenty of things that looked like useless toys never did become anything more than that. The fact that people laughed at X, and X then started working, does not tell us that if people now laugh at Y or Z, those will work too.
So, we have a pair of equal and opposite fallacies. There is no predictive value in saying ‘that doesn’t work’ or ‘that looks like a toy’, and there is also no predictive value in saying ‘people always say that.’ As [Wolfgang] Pauli put it, statements like this are ‘not even wrong’ – they give no insight into what will happen.
Instead, you have to go one level further. You need a theory for why this will get better, or why it won’t, and for why people will change their behaviour, or for why they won’t…
That’s to say, Evans suggests, you need to be able to envision a roadmap from “toy” to wide, practical use…
These roadmaps can come in steps. It took quite a few steps to get from the [Wright Flyer, pictured above left] to something that made ocean liners obsolete, and each of those steps was useful. The PC also came in steps – from hobbyists to spreadsheets to web browsers. The same thing for mobile – we went from expensive analogue phones for a few people to cheap GSM phones for billions of people to smartphones that changed what mobile meant. But there was always a path. The Apple 1, Netscape and the iPhone all looked like impractical toys that ‘couldn’t be used for real work’, but there were obvious roadmaps to change that – not necessarily all the way to the future, but certainly to a useful next step.
Equally, sometimes the roadmap is ‘forget about this for 20 years’. The Newton or the IBM Simon were just too early, as was the first wave of VR in the 80s and 90s. You could have said, deterministically, that Moore’s Law would make VR or pocket computers useful at some point, so there was notionally a roadmap, but the roadmap told you to work on something else. This is different to the Rocket Belt [pictured above right], where there was no foreseeable future development that would make it work…
Much the same sort of questions apply to the other side of the problem – even if this did get very cheap and very good, who would use it? You can’t do a waterfall chart of an engineering roadmap here, but you can again ask questions – what would have to change? Are you proposing a change in human nature, or a different way of expressing it? What’s your theory of why things will change or why they won’t?
The thread through all of this is that we don’t know what will happen, but we do know what could happen – we don’t know the answer, but we can at least ask useful questions. The key challenge to any assertion about what will happen, I think, is to ask ‘well, what would have to change?’ Could this happen, and if it did, would it work? We’re always going to be wrong sometimes, but we can try to be wrong for the right reasons…
A practical approach to technology forecasting: “Not even wrong: predicting tech,” from @benedictevans.
* Carl Sagan
###
As we ponder prospects, we might send carefully calculated birthday greetings to J. Presper Eckert; he was born on this date in 1919. An electrical engineer, he co-designed (with John Mauchly) ENIAC, the first general-purpose electronic digital computer, for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“All the world is made of faith, and trust, and pixie dust”*…
Beyond the Prisoner’s Dilemma— an interactive guide to game theory and why we trust each other: The Evolution of Trust, from Nicky Case (@ncasenmare), via @frauenfelder@mastodon.cloud in @Recomendo6.
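For readers who want to peek under the hood of Case’s guide, the underlying setting is the repeated Prisoner’s Dilemma, in which simple strategies like “always cheat” and “copy the other player’s last move” (tit for tat) can be scored against one another. A minimal sketch of that idea follows; the payoff numbers and strategy implementations here are generic textbook choices, assumed for illustration, not taken from Case’s code:

```python
# Minimal iterated Prisoner's Dilemma -- the setting The Evolution of Trust builds on.
# Payoffs and strategies are standard textbook values, assumed for illustration.

PAYOFF = {  # (my move, their move) -> my points; "C" = cooperate, "D" = cheat/defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Cheat every round, no matter what."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation pays both sides
print(play(tit_for_tat, always_defect))  # (9, 14): cheating wins a little, once, then stalls
```

Run enough pairings like these and the lesson of Case’s interactive version emerges: when the game is repeated and players remember past moves, reciprocity tends to outperform pure exploitation.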
* J. M. Barrie, Peter Pan
###
As we rethink reciprocal reliance, we might send far-sighted birthday greetings to Michel de Nostredame; he was born on this date in 1503. Better known as Nostradamus, he was an astrologer, apothecary, physician, and reputed seer, who is best known for his book Les Prophéties (published in 1555), a collection of 942 poetic quatrains allegedly predicting future events.
In the years since the publication of his Les Prophéties, Nostradamus has attracted many supporters, who, along with some of the popular press, credit him with having accurately predicted many major world events. Other, more critical, observers note that many of his supposed correct calls were the result of “generous” (or plainly incorrect) translations/interpretations; and more generally, that Nostradamus’ genius for vagueness allows– indeed encourages– enthusiasts to “find” connections where they may or may not exist.
“Prediction is very difficult, especially if it’s about the future”*…
… but maybe not as hard as it once was. While multi-agent artificial intelligence was first used in the sixties, advances in technology have made it an extremely sophisticated modeling– and prediction– tool. As Derek Beres explains, it can be a powerfully accurate prediction engine… and, potentially, an equally powerful tool for manipulation…
The debate over free will is ancient, yet data don’t lie — and we have been giving tech companies access to our deepest secrets… We like to believe we’re not predictable, but that’s simply not true…
Multi-agent artificial intelligence (MAAI) is predictive modeling at its most advanced. It has been used for years to create digital societies that mimic real ones with stunningly accurate results. In an age of big data, there exists more information about our habits — political, social, fiscal — than ever before. As we feed them information on a daily basis, their ability to predict the future is getting better.
[And] given the current political climate around the planet… MAAI will most certainly be put to insidious means. With in-depth knowledge comes plenty of opportunities for exploitation and manipulation, no deepfake required. The intelligence might be artificial, but the target audience most certainly is not…
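To make the idea of a “digital society” slightly more concrete, here is a toy agent-based sketch in the spirit of what the article describes; the agents, the opinion-update rule, and every parameter below are invented for illustration and are not drawn from any system Beres discusses. Real MAAI models layer in demographics, social networks, and behavioral data at a scale far beyond this:

```python
# Toy agent-based simulation: agents with opinions in [0, 1] repeatedly meet and,
# if their views are close enough, nudge toward each other (a bounded-confidence rule).
# Everything here -- agent count, threshold, update rule -- is an illustrative assumption.
import random

random.seed(0)

NUM_AGENTS = 200
INTERACTIONS = 2000
CONFIDENCE = 0.2   # agents only "listen" to others within this opinion distance

opinions = [random.random() for _ in range(NUM_AGENTS)]

for _ in range(INTERACTIONS):
    a, b = random.sample(range(NUM_AGENTS), 2)
    if abs(opinions[a] - opinions[b]) < CONFIDENCE:
        midpoint = (opinions[a] + opinions[b]) / 2
        opinions[a] = (opinions[a] + midpoint) / 2   # each moves halfway to the midpoint
        opinions[b] = (opinions[b] + midpoint) / 2

# A crude "prediction": how many distinct opinion clusters survive?
clusters = len({round(o, 1) for o in opinions})
print(f"Opinion clusters after {INTERACTIONS} interactions: {clusters}")
```

Even a toy like this hints at both uses the article worries about: run it forward to forecast how opinions might consolidate, or search over its parameters for the interventions that steer them.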
Move over deepfakes; multi-agent artificial intelligence is poised to manipulate your mind: “Can AI simulations predict the future?,” from @derekberes at @bigthink.
* Niels Bohr
###
As we analyze augury, we might note that today is National Computer Security Day. It was inaugurated by the Association for Computing Machinery (ACM) in 1988, shortly after an attack on ARPANET (the forerunner of the internet as we know it) that damaged several of the connected machines. Meant to call attention to the constant need for vigilance about security, it’s a great day to change all of one’s passwords.
“So these are the ropes, The tricks of the trade, The rules of the road”*…
Morgan Housel shares a few things with which he’s come to terms…
Everyone belongs to a tribe and underestimates how influential that tribe is on their thinking.
Most of what people call “conviction” is a willful disregard for new information that might make you change your mind. That’s when beliefs turn dangerous.
History is driven by surprising events but forecasting is driven by obvious ones.
People learn when they’re surprised. Not when they read the right answer, or are told they’re doing it wrong, but when they experience a gap between expectations and reality.
“Learn enough from history to respect one another’s delusions.” -Will Durant
Your personal experiences make up maybe 0.00000001% of what’s happened in the world but maybe 80% of how you think the world works.
Unsustainable things can last longer than you anticipate.
It’s hard to tell the difference between boldness and recklessness, ambition and greed, contrarian and wrong.
There are two types of information: stuff you’ll still care about in the future, and stuff that matters less and less over time. Long-term vs. expiring knowledge. It’s critical to identify which is which when you come across something new.
Small risks are overblown because they’re easy to talk about, big risks are discounted and ignored because they seem preposterous before they arrive.
You can’t believe in risk without also believing in luck because they are fundamentally the same thing—an acknowledgment that things outside of your control can have a bigger impact on outcomes than anything you do on your own.
Once-in-a-century events happen all the time because lots of unrelated things can go wrong. If there’s a 1% chance of a new disastrous pandemic, a 1% chance of a crippling depression, a 1% chance of a catastrophic flood, a 1% chance of political collapse, and on and on, then the odds that something bad will happen next year – or any year – are … pretty good. It’s why Arnold Toynbee says history is “just one damn thing after another.”
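The arithmetic behind that last rule is easy to check. As a rough illustration (the ten independent 1% risks and the independence assumption are mine, for the sake of the calculation, not Housel’s):

```python
# Probability that at least one of several independent rare events happens.
# The ten 1%-per-year risks below are illustrative assumptions, and real risks
# are rarely independent, so treat the numbers as a back-of-the-envelope check.

def prob_at_least_one(probabilities):
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)          # chance that none of the events occur
    return 1.0 - p_none

annual_risks = [0.01] * 10           # ten unrelated 1%-per-year disasters

per_year = prob_at_least_one(annual_risks)
per_decade = 1.0 - (1.0 - per_year) ** 10

print(f"Chance something bad happens in a given year:  {per_year:.1%}")   # ~9.6%
print(f"Chance something bad happens within a decade: {per_decade:.1%}")  # ~63.4%
```

With only ten such risks in play, a “once-in-a-century” surprise becomes better than a coin flip over a decade, which is the point of the aphorism.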
Many more affecting aphorisms at: “Little Rules About Big Things,” from @morganhousel @collabfund.
* “Rules Of The Road,” by Cy Coleman and Carolyn Leigh (famously recorded by Tony Bennett and Nat King Cole)
###
As we ponder precepts, we might send prophylactic birthday greetings to Samuel W. Alderson; he was born on this date in 1914. A physicist and engineer of broad accomplishment, Alderson is probably best remembered as the inventor of the crash test dummy. Alderson created his first dummies in 1956 to test jet ejection seats for the military. But with the passage of the National Traffic and Motor Vehicle Safety Act in 1966 (on the heels of the stir created by Ralph Nader’s Unsafe at Any Speed), Alderson found a much broader market. (From the first experiments on car safety in the 1930s, cadavers had been used to assess risk and damage; the dummy had obvious advantages.) Alderson continuously improved his dummies, and later branched out to produce medical “phantoms” for simulations– e.g., synthetic wounds that ooze mock blood.

“It is difficult to predict, especially the future”*…
While perfectly accurate prediction is beyond our ken, it is possible to spot indicators– early warning signs or signals– that the world is headed in one direction or another. BBC R&D Futures is an attempt to do exactly that, in the service of building understanding of the impacts those shifts might have, right down to the artifacts they might spawn. For example (from a recent “signals” newsletter):
We are in an era of increasing protests
A number of recent studies tracking protests and demonstrations around the world report a rise in such events. One, looking at demonstrations between 2006 and 2020, found that “the number of protest movements around the world had more than tripled in less than 15 years. Every region saw an increase, the study found, with some of the largest protest movements ever recorded.”
Common reasons given for protesting were ‘perceived failure of political systems or representation’, inequality, corruption, lack of action over climate change, and the sense that people’s concerns are not being addressed.
- Why is the world protesting so much? A new study claims to have some answers. – The Washington Post
- World Protests | SpringerLink
- Political protests have become more widespread and more frequent | The Economist
- The Age of Mass Protests: Understanding an Escalating Global Trend | Center for Strategic and International Studies
- Global Protest Tracker – Carnegie Endowment for International Peace
Concerned about the future? A useful source of social, economic, technological, environmental, and political dynamics worthy of attention: BBC R&D Futures.
* Niels Bohr
###
As we seek signs, we might recall that it was on this date in 1876 that Melville Bissell patented the first carpet sweeper… one of the innovations that revolutionized housekeeping… and, with it, modern society.