Posts Tagged ‘John von Neumann’
“Everything we care about lies somewhere in the middle, where pattern and randomness interlace”*…
We tend dramatically to underestimate the role of randomness in the world…
Arkansas was one out away from the 2018 College World Series championship, leading Oregon State in the best-of-three series and 3-2 in the ninth inning of the second game when Cadyn Grenier lofted a foul pop down the right-field line. Three Razorbacks converged on the ball and were in position to make a routine play on it, only to watch it fall untouched to the ground in the midst of them. Had any one of them made the play, Arkansas would have been the national champion.
Nobody did.
Given “another lifeline,” Grenier hit an RBI single to tie the game before Trevor Larnach launched a two-run homer to give the Beavers a 5-3 lead and, ultimately, the game. “As soon as you see the ball drop, you know you have another life,” Grenier said. “That’s a gift.” The Beavers accepted the gift eagerly and went on to win the championship the next day as Oregon State rode freshman pitcher Kevin Abel to a 5-0 win over Arkansas in the deciding game of the series. Abel threw a complete game shutout and retired the last 20 hitters he faced.
The highly unlikely happens pretty much all the time…
We readily – routinely – underestimate the power and impact of randomness in and on our lives. In his book, The Drunkard’s Walk, Caltech physicist Leonard Mlodinow employs the idea of the “drunkard’s [random] walk” to compare “the paths molecules follow as they fly through space, incessantly bumping, and being bumped by, their sister molecules,” with “our lives, our paths from college to career, from single life to family life, from first hole of golf to eighteenth.”
Although countless random interactions seem to cancel one another out within large data sets, sometimes, “when pure luck occasionally leads to a lopsided preponderance of hits from some particular direction…a noticeable jiggle occurs.” When that happens, we notice the unlikely directional jiggle and build a carefully concocted story around it while ignoring the many, many random, counteracting collisions.
As Tversky and Kahneman have explained, “Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not ‘corrected’ as a chance process unfolds, they are merely diluted.”
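To make “diluted” concrete, here’s a minimal Python sketch of a one-dimensional drunkard’s walk (my illustration, not Mlodinow’s or Tversky and Kahneman’s): unbiased walks still produce noticeable jiggles, and an early lucky deviation is never “paid back”; it simply shrinks as a fraction of an ever-longer walk.

```python
import random

def drunkards_walk(steps):
    """Sum of `steps` fair coin flips (+1 or -1): a one-dimensional drunkard's walk."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

random.seed(42)

# "Noticeable jiggles": even unbiased walks routinely drift far from zero.
print([drunkards_walk(1_000) for _ in range(8)])

# Dilution, not correction: start 50 steps ahead by pure luck and keep walking.
# The absolute lead persists on average, but its share of the total fades.
for n in (100, 10_000, 1_000_000):
    position = 50 + drunkards_walk(n)
    print(f"after {n:>9,} more steps: position {position:+d}, {position / n:+.4f} per step")
```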
…
As Stephen Jay Gould famously argued, were we able to recreate the experiment of life on Earth a million different times, nothing would ever be the same, because evolution relies upon randomness. Indeed, the essence of history is contingency.
Randomness rules.
…
Luck matters. A lot. Yet, we tend dramatically to underestimate the role of randomness in the world.
The self-serving bias is our tendency to see the good stuff that happens as our doing (“we worked really hard and executed the game plan well”) while the bad stuff isn’t our fault (“It just wasn’t our night” or “we simply couldn’t catch a break” or “we would have won if the umpiring hadn’t been so awful”). Thus, desirable results are typically due to our skill and hard work — not luck — while lousy results are outside of our control and the offspring of being unlucky.
Two fine books undermine this outlook by (rightly) attributing a surprising amount of what happens to us — both good and bad — to luck. Michael Mauboussin’s The Success Equation seeks to untangle elements of luck and skill in sports, investing, and business. Ed Smith’s Luck considers a number of fields – international finance, war, sports, and even his own marriage – to examine how random chance influences the world around us. For example, Mauboussin describes the “paradox of skill” as follows: “As skill improves, performance becomes more consistent, and therefore luck becomes more important.” In investing, therefore (and for example), as the population of skilled investors has increased, the variation in skill has narrowed, making luck increasingly important to outcomes.
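A minimal simulation makes the paradox visible (a sketch of the idea, not Mauboussin’s actual model): score each contest as skill plus luck, then watch the better player’s win rate as the spread of skill narrows.

```python
import random

def better_player_win_rate(skill_spread, luck_spread=1.0, contests=10_000):
    """Outcome = skill + luck. How often does the more skilled contestant win?"""
    wins = 0
    for _ in range(contests):
        skill_a, skill_b = random.gauss(0, skill_spread), random.gauss(0, skill_spread)
        outcome_a = skill_a + random.gauss(0, luck_spread)
        outcome_b = skill_b + random.gauss(0, luck_spread)
        if (outcome_a > outcome_b) == (skill_a > skill_b):
            wins += 1
    return wins / contests

random.seed(1)
for spread in (2.0, 1.0, 0.25):  # skill variation narrowing, as in a maturing field
    print(f"skill spread {spread}: better player wins {better_player_win_rate(spread):.0%}")
# As the spread collapses, results drift toward a coin flip: luck decides.
```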
On account of the growth and development of the investment industry, John Bogle could quite consistently write his senior thesis at Princeton on the successes of active fund management and then go on to found Vanguard and become the primary developer and intellectual forefather of indexing. In other words, the ever-increasing aggregate skill (supplemented by massive computing power) of the investment world has come largely to cancel itself out.
…
After a big or revolutionary event, we tend to see it as having been inevitable. Such is the narrative fallacy. In this paper, ESSEC Business School’s Stoyan Sgourev notes that scholars of innovation typically focus upon the usual type of case, where incremental improvements rule the day. Sgourev moves past the typical to look at the unusual type of case, where there is a radical leap forward (equivalent to Thomas Kuhn’s paradigm shifts in science), as with Picasso and Les Demoiselles d’Avignon.
As Sgourev carefully argued, the Paris art market of Picasso’s time had recently become receptive to the commercial possibilities of risk-taking. Thus, artistic innovation was becoming commercially viable. Breaking with the past was then being encouraged for the first time. It would soon be demanded.
Most significantly for our purposes, Sgourev’s analysis of Cubism suggests that having an exceptional idea isn’t enough. For radical innovation really to take hold, market conditions have to be right, making its success a function of luck and timing as much as genius. Note that Van Gogh — no less a genius than Picasso — sold only one painting in his lifetime.
As noted above, we all like to think that our successes are earned and that only our failures are due to luck – bad luck. But the old expression – it’s better to be lucky than good – is at least partly true. That said, it’s best to be lucky *and* good. As a consequence, in all probabilistic fields (which is nearly all of them), the best performers dwell on process and diversify their bets. You should do the same…
As [Nate] Silver emphasizes in The Signal and the Noise, we readily overestimate the degree of predictability in complex systems [and t]he experts we see in the media are much too sure of themselves (I wrote about this problem in our industry from a slightly different angle…). Much of what we attribute to skill is actually luck.
Plan accordingly.
Taking the unaccountable into account: “Randomness Rules,” from Bob Seawright (@RPSeawright), via @JVLast
[image above: source]
* James Gleick, The Information: A History, a Theory, a Flood
###
As we contemplate chance, we might spare a thought for Oskar Morgenstern; he died on this date in 1977. An Austrian economist who was exiled by the Nazi Anschluss and settled at Princeton, he collaborated with the mathematician John von Neumann to write Theory of Games and Economic Behavior, published in 1944, which is recognized as the first book on game theory — thus co-founding the field.
Game theory was developed extensively in the 1950s, and has become widely recognized as an important tool in many fields – perhaps especially in the study of evolution. Eleven game theorists have won the economics Nobel Prize, and John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory.
Game theory’s roots date back (at least) to the 1654 letters between Pascal and Fermat, which (along with work by Cardano and Huygens) marked the beginning of probability theory. (See Peter Bernstein’s marvelous Against the Gods.) The application of probability (Bayes’ rule, discrete and continuous random variables, and the computation of expectations) accounts for the utility of game theory; the role of randomness (along with the behavioral psychology of a game’s participants) explains why it’s not a perfect predictor.
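For a concrete taste of that “computation of expectations,” here’s a minimal Python sketch (an illustration of the idea, not anything drawn from von Neumann and Morgenstern’s text): the expected payoff of matching pennies under its standard 50/50 mixed strategies.

```python
from itertools import product

# Matching pennies: the row player wins (+1) when the coins match, loses (-1) otherwise.
payoff_row = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}
mix_row = {"H": 0.5, "T": 0.5}  # the standard equilibrium mixed strategies
mix_col = {"H": 0.5, "T": 0.5}

# An expectation: weight each outcome's payoff by its probability, then sum.
expected = sum(mix_row[r] * mix_col[c] * payoff_row[(r, c)]
               for r, c in product("HT", repeat=2))
print(expected)  # 0.0 -- a fair game at equilibrium
```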
“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”*…
There’s a depressing sort of symmetry in the fact that our modern paradigms of privacy were developed in response to the proliferation of photography and its exploitation by tabloids. The seminal 1890 Harvard Law Review article The Right to Privacy—which every essay about data privacy is contractually obligated to cite—argued that the right of an individual to object to the publication of photographs ought to be considered part of a general ‘right to be let alone’.
130 years on, privacy is still largely conceived of as an individual thing, wherein we get to make solo decisions about when we want to be left alone and when we’re comfortable being trespassed upon. This principle undergirds the notice-and-consent model of data management, which you might also know as the Pavlovian response of clicking “I agree” on any popup and login screen with little regard for the forty pages of legalese you might be agreeing to.
The thing is, the right to be left alone makes perfect sense when you’re managing information relationships between individuals, where there are generally pretty clear social norms around what constitutes a boundary violation. Reasonable people can and do disagree as to the level of privacy they expect, but if I invite you into my home and you snoop through my bedside table and read my diary, there isn’t much ambiguity about that being an invasion.
But in the age of ✨ networked computing ✨, this individual model of privacy just doesn’t scale anymore. There are too many exponentially intersecting relationships for any of us to keep in our head. It’s no longer just about what we tell a friend or the tax collector or even a journalist. It’s the digital footprint that we often unknowingly leave in our wake every time we interact with something online, and how all of those websites and apps and their shadowy partners talk to each other behind our backs. It’s the cameras in malls tracking our location and sometimes emotions, and it’s the license plate readers compiling a log of our movements.
At a time when governments and companies are increasingly investing in surveillance mechanisms under the guise of security and transparency, that scale is only going to keep growing. Our individual comfort about whether we are left alone is no longer the only, or even the most salient part of the story, and we need to think about privacy as a public good and a collective value.
…
I like thinking about privacy as being collective, because it feels like a more true reflection of the fact that our lives are made up of relationships, and information about our lives is social and contextual by nature. The fact that I have a sister also indicates that my sister has at least one sibling: me. If I took a DNA test through 23andme I’m not just disclosing information about me but also about everyone that I’m related to, none of whom are able to give consent. The privacy implications for familial DNA are pretty broad: this information might be used to sell or withhold products and services, expose family secrets, or implicate a future as-yet-unborn relative in a crime. I could email 23andme and ask them to delete my records, and they might eventually comply in a month or three. But my present and future relatives wouldn’t be able to do that, or even know that their privacy had been compromised at all.
Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us. I might think nothing of posting a photo of going out with my friends and mentioning the name of the bar, but I’ve just exposed our physical location to the internet. If one of my friends has had to deal with a stalker in their past, I could’ve put their physical safety at risk. Even if I’m careful to make the post friends-only, the people I trust are not the same as the people my friends trust. In an individual model of privacy, we are only as private as our least private friend.
Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me.
…
Data collection isn’t always bad, but it is always risky. Sometimes that’s due to shoddy design and programming or lazy security practices. But even the best engineers often fail to build risk-free systems, by the very nature of systems.
Systems are easier to attack than they are to defend. If you want to defend a system, you have to make sure every part of it is perfectly implemented to guard against any possible vulnerabilities. Oftentimes, trying to defend a system means adding additional components, which just ends up creating more potential weak points. Whereas if you want to attack, all you have to do is find the one weakness that the systems designer missed. (Or, to paraphrase the IRA, you only have to be lucky once.)
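A toy model of that asymmetry (my numbers are assumptions, not the author’s): give every component a small independent chance of harboring a flaw, and count how quickly “at least one weakness” becomes a near-certainty as components are added.

```python
p_flaw = 0.01  # assumed chance that any single component hides a vulnerability

for n_components in (10, 100, 1_000):
    # The defender must be flawless everywhere; the attacker needs one miss.
    p_breach = 1 - (1 - p_flaw) ** n_components
    print(f"{n_components:>5} components -> {p_breach:6.1%} chance of at least one flaw")
# 10 -> 9.6%, 100 -> 63.4%, 1,000 -> essentially 100%
```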
This is true of all systems, digital or analog, but the thing that makes computer systems particularly vulnerable is that the same weaknesses can be deployed across millions of devices, in our phones and laptops and watches and toasters and refrigerators and doorbells. When a vulnerability is discovered in one system, an entire class of devices around the world is instantly a potential target, but we still have to go fix them one by one.
This is how the Equifax data leak happened. Equifax used a piece of open source software that had a security flaw in it, the people who work on that software found it and fixed it, and instead of diligently updating their systems Equifax hit the snooze button for four months and let hackers steal hundreds of millions of customer records. And while Equifax is definitely guilty of the aforementioned lazy security practices, this incident also illustrates how fragile computer systems are. From the moment this bug was discovered, every server in the world that ran that software was at risk.
What’s worse, in many cases people weren’t even aware that their data was stored with Equifax. If you’re an adult who has had a job or a phone bill or interacted with a bank in the last seven years, your identifying information is collected by Equifax whether you like it or not. The only way to opt out would have been to be among the small percentage of overwhelmingly young, poor, and racialized people who have no credit histories, which significantly limits the scope of their ability to participate in the economy. How do you notice-and-consent your way out of that?
…
There unfortunately isn’t one weird trick to save democracy, but that doesn’t mean there aren’t lessons we can learn from history to figure out how to protect privacy as a public good. The scale and ubiquity of computers may be unprecedented, but so is the scale of our collective knowledge…
Read the full piece (and you should) for Jenny Zhang‘s (@phirephoenix) compelling case that we should treat – and protect – privacy as a public good, and her explanation of how we might do that: “Left alone, together.” TotH to Sentiers.
[image above: source]
* Edward Snowden
###
As we think about each other, we might recall that it was on this date in 1939 that the first government appropriation was made to support the construction of the Harvard Mark I computer.
Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. It was completed in 1944 (and put to work on a set of war-related tasks, including calculations – overseen by John von Neumann – for the Manhattan Project).
The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.
“Nothing happens until something moves”*…
What determines our fate? To the Stoic Greek philosophers, fate is the external product of divine will, ‘the thread of your destiny’. To transcendentalists such as Henry David Thoreau, it is an inward matter of self-determination, of ‘what a man thinks of himself’. To modern cosmologists, fate is something else entirely: a sweeping, impersonal physical process that can be boiled down into a single, momentous number known as the Hubble Constant.
The Hubble Constant can be defined simply as the rate at which the Universe is expanding, a measure of how quickly the space between galaxies is stretching apart. The slightest scrutiny, however, exposes a web of complexity encased within that seeming simplicity. Extrapolating the expansion process backward implies that all the galaxies we can observe originated together at some point in the past – emerging from a Big Bang – and that the Universe has a finite age. Extrapolating forward presents two starkly opposed futures, either an endless era of expansion and dissipation or an eventual turnabout that will wipe out the current order and begin the process anew.
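That finite age drops out of a one-line calculation. Here’s a hedged back-of-the-envelope sketch (assuming H0 ≈ 70 km/s per megaparsec, and ignoring the deceleration and acceleration a real cosmological model would include):

```python
H0 = 70.0                    # assumed Hubble Constant, in km/s per megaparsec
KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

# "Hubble time": rewind today's expansion rate until all galaxies converge.
hubble_time_s = KM_PER_MPC / H0  # (km/Mpc) / (km/s/Mpc) = seconds
print(f"{hubble_time_s / SECONDS_PER_GYR:.1f} billion years")  # ~14.0, near the measured ~13.8
```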
That’s a lot of emotional and intellectual weight resting on one small number…
How scientists pinned a single number on all of existence: “Fate of the Universe.”
[Readers might remember that the Big Bang wasn’t always an accepted paradigm — and that ongoing research continues to surface challenges.]
* Albert Einstein
###
As we center ourselves, we might spare a thought for Kurt Friedrich Gödel; he died on this date in 1978. A logician, mathematician, and philosopher, he is considered (along with Aristotle, Alfred Tarski — whose birthday this also is — and Gottlob Frege) to be one of the most important logicians in history. Gödel had an immense impact upon scientific and philosophical thinking in the 20th century. He is, perhaps, best remembered for his Incompleteness Theorems, which led to (among other important results) Alan Turing’s insights into computational theory.
Kurt Gödel’s achievement in modern logic is singular and monumental – indeed it is more than a monument, it is a landmark which will remain visible far in space and time. … The subject of logic has certainly completely changed its nature and possibilities with Gödel’s achievement. — John von Neumann