(Roughly) Daily


“It is a capital mistake to theorize before one has data”*…

The estimable Claudia Sahm on what the elimination of an obscure advisory committee on economic data says about the administration’s commitment to relevance and accuracy…

In a time of great economic uncertainty, President Donald Trump’s administration quietly took a step last week that could create even more: Secretary of Commerce Howard Lutnick disbanded the Federal Economic Statistics Advisory Committee.

I realize that the shuttering of an obscure statistical advisory committee may not strike anyone as a scandal, much less an outrage. But as an economist who has presented to the committee, known as FESAC, I know how it improved the information used by both the federal government and private enterprise to make economic decisions. Most Americans do not realize how many aspects of their lives rely on timely and accurate government data.

One of FESAC’s official responsibilities was “exploring ways to enhance the agencies’ economic indicators to make them timelier, more accurate, and more specific to meeting changing demands and future data needs.” In the complex and highly dynamic US economy, this is an ongoing effort — not a one-time task that has been “fulfilled,” which was the Commerce Department’s stated reason for terminating the committee.

The 15 members of the advisory committee, who were unpaid, brought deep technical expertise on economic measurement from the private sector, academia and the non-profit world. They were a sounding board for the Census Bureau, Bureau of Labor Statistics, and Bureau of Economic Analysis, which produce much of the nation’s official statistics.

If statistics fail to keep up with the changing economy, they lose their usefulness. When the committee last met in December, one focus was on measuring the use and production of artificial intelligence. Staff from the agencies shared existing findings on AI, such as from the Business Trends and Outlook Survey that began in 2022, and outlined new data collection efforts. AI’s current use among businesses has nearly doubled since late 2023, and even more businesses expect to adopt AI in the next six months.

The committee was asked what data products would be most useful. Expert feedback, including a request to harmonize the definitions of AI across surveys and align with cutting-edge research, is especially valuable at the early stages of data collection. The growth and employment effects of AI are among the most pressing questions facing the economy, and external experts are crucial to supporting the creation of high-quality data.

Enhancing official economic statistics under budget constraints often requires creative approaches. At its meeting last June, the committee discussed using private-sector data to create statistics on regional employment and other outcomes. There is considerable demand among businesses and local governments to have timely geographic detail, but it is cost-prohibitive with current government surveys. Members of FESAC, some of whom work at companies like Indeed and JP Morgan Chase, offered first-hand knowledge of the pros and cons of using private-sector data.

The committee contributed far more than just twice-a-year meetings. It also created relationships with the private sector that government agencies could draw on as part of their continuing effort to improve their statistics.

The National Academies of Sciences, in discussing best practices for statistical agencies, argues that external advisory committees are a good way to engage with users of the data and obtain expert advice. Moreover, external evaluation should be part of regular program reviews to ensure quality, relevance and cost-effectiveness. That’s exactly what FESAC did.

The statistical agencies need more, not fewer, resources now to meet their challenges. During the campaign, Trump repeatedly questioned the credibility of US employment statistics. In particular, he claimed that the downward revisions of monthly payrolls showed political interference. Senators Bill Cassidy and Susan Collins asked the Bureau of Labor Statistics to explain why large revisions were happening and how to avoid them. FESAC could have been a valuable resource for possible improvements.

Disbanding FESAC does not advance the administration’s goal of greater efficiency in the government. In 2024, the committee’s cost was expected to be a modest $120,000, covering travel expenses and minimal staff support. Virtual-only meetings could have reduced those costs still further, if that was a concern. Regardless, the benefits to the millions of data users from regular reviews by external experts far exceed that negligible cost.

Putting a low-cost, high-value committee on the chopping block does not bode well for other investments in the official statistics. Reductions in staff and budget would likely degrade the quality of the official statistics. Even before Trump took office, all three agencies operated in a tight budget environment.

Reduced transparency in official statistics is perhaps the most troubling aspect of disbanding FESAC. Cutting off agency staff from external advisers creates an environment where political interference could occur much more easily — and go undetected. With political officials such as Lutnick arguing publicly that GDP should exclude government spending, it is especially important to have external, independent experts.

And FESAC is not alone. By executive order, the administration is ending several advisory committees in the federal government, reducing transparency and the technical resources for agencies. It’s a short-sighted approach that could undermine essential government services…

“The War on Government Statistics Has Quietly Begun” (gift link) from @claudia-sahm.bsky.social in @bloomberg.com.

Apposite: “The True Cost of Trump’s Cuts to NOAA and NASA,” “Trump’s shocking purge of public health data, explained,” and “Trump USDA Sued for Erasing Webpages Vital to Farmers“… and so many– too many– others.

(Image above: source… note how many of the data sources cited are precisely the sorts of government resources being targeted)

* Sherlock Holmes (Arthur Conan Doyle)

###

As we drive with our windows painted over, we might send understanding birthday greetings to Robert Heilbroner; he was born on this date in 1919.  An economist and historian of economic thought, he was the author of some two dozen books, the best known of which is The Worldly Philosophers: The Lives, Times and Ideas of the Great Economic Thinkers (1953), a remarkable survey of the lives and contributions of famous economists (perhaps most notably Adam Smith, Karl Marx, and John Maynard Keynes). Your correspondent can also recommend The Future as History (1960).

Heilbroner was considered highly unconventional by those in his field; indeed, he regarded himself as a social theorist and “worldly philosopher” (a philosopher preoccupied with “worldly” affairs, such as economic structures) and tended to integrate the disciplines of history, economics, and philosophy into his work. Nonetheless, Heilbroner was recognized by his peers as a prominent economist and was elected vice president of the American Economic Association in 1972.

source

“Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital.”*…

Former Comptroller of the Currency Eugene Ludwig argues that, at least as far as many (maybe most) Americans are concerned, unemployment is higher, wages are lower, and growth is less robust than government statistics suggest…

Before the presidential election, many Democrats were puzzled by the seeming disconnect between “economic reality” as reflected in various government statistics and the public’s perceptions of the economy on the ground. Many in Washington bristled at the public’s failure to register how strong the economy really was. They charged that right-wing echo chambers were conning voters into believing entirely preposterous narratives about America’s decline.

What they rarely considered was whether something else might be responsible for the disconnect — whether, for instance, government statistics were fundamentally flawed. What if the numbers supporting the case for broad-based prosperity were themselves misrepresentations? What if, in fact, darker assessments of the economy were more authentically tethered to reality?

On some level, I relate to the underlying frustrations. Having served as comptroller of the currency during the 1990s, I’ve spent substantial chunks of my career exploring the gaps between public perception and economic reality, particularly in the realm of finance. Many of the officials I’ve befriended and advised over the last quarter-century — members of the Federal Reserve, those running regulatory agencies, many leaders in Congress — have told me they consider it their responsibility to set public opinion aside and deal with the economy as it exists by the hard numbers. For them, government statistics are thought to be as reliable as solid facts.

In recent years, however, as my focus has broadened beyond finance to the economy as a whole, the disconnect between “hard” government numbers and popular perception has spurred me to question that faith. I’ve had the benefit of living in two realms that seem rarely to intersect — one as a Washington insider, the other as an adviser to lenders and investors across the country. Toggling between the two has led me to be increasingly skeptical that the government’s measurements properly capture the realities defining unemployment, wage growth and the strength of the economy as a whole.

These numbers have time and again suggested to many in Washington that unemployment is low, that wages are growing for middle America and that, to a greater or lesser degree, economic growth is lifting all boats year upon year. But when traveling the country, I’ve encountered something very different…

… Within the nation’s capital, this gap in perception has had profound implications. For decades, a small cohort of federal agencies have reported many of the same economic statistics, using fundamentally the same methodology or relying on the same sources, at the same appointed times. Rarely has anyone ever asked whether the figures they release hew to reality. Given my newfound skepticism, I decided several years ago to gather a team of researchers under the rubric of the Ludwig Institute for Shared Economic Prosperity to delve deeply into some of the most frequently cited headline statistics.

What we uncovered shocked us. The bottom line is that, for 20 years or more, including the months prior to the election, voter perception was more reflective of reality than the incumbent statistics. Our research revealed that the data collected by the various agencies is largely accurate. Moreover, the people staffing those agencies are talented and well-intentioned. But the filters used to compute the headline statistics are flawed. As a result, they paint a much rosier picture of reality than bears out on the ground.

Take, as a particularly egregious example, what is perhaps the most widely reported economic indicator: unemployment. Known to experts as the U-3, the number misleads in several ways. First, it counts as employed the millions of people who are unwillingly under-employed — that is, people who, for example, work only a few hours each week while searching for a full-time job. Second, it does not take into account many Americans who have been so discouraged that they are no longer trying to get a job. Finally, the prevailing statistic does not account for the meagerness of any individual’s income. Thus you could be homeless on the streets, making an intermittent income and functionally incapable of keeping your family fed, and the government would still count you as “employed.”

I don’t believe those who went into this past election taking pride in the unemployment numbers understood that the near-record low unemployment figures — the figure was a mere 4.2 percent in November — counted homeless people doing occasional work as “employed.” But the implications are powerful. If you filter the statistic to include as unemployed people who can’t find anything but part-time work or who make a poverty wage (roughly $25,000), the percentage is actually 23.7 percent. In other words, nearly one of every four workers is functionally unemployed in America today — hardly something to celebrate…
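
[As a concrete, entirely illustrative sketch of the kind of filtering Ludwig describes: given individual-level labor-force records, a broader “functional unemployment” rate can be computed by counting as unemployed anyone who is jobless, stuck in part-time work while seeking full-time work, or earning below a poverty wage. The record fields and the $25,000 cutoff below are assumptions drawn from the excerpt, not LISEP’s or BLS’s actual methodology.]

```python
# Illustrative sketch only: field names and the $25,000 poverty-wage
# threshold are assumptions based on the excerpt above, not the actual
# LISEP/BLS definitions or microdata layout.
from dataclasses import dataclass

POVERTY_WAGE = 25_000  # rough annual poverty-wage cutoff cited in the excerpt

@dataclass
class Worker:
    in_labor_force: bool   # working or actively looking for work
    employed: bool         # any paid work at all (the U-3 notion of "employed")
    wants_full_time: bool  # would take a full-time job if one were available
    has_full_time: bool    # currently holds a full-time job
    annual_income: float   # annualized earnings from that work

def functionally_unemployed(w: Worker) -> bool:
    """Broader filter: jobless, involuntarily part-time, or poverty-wage."""
    if not w.employed:
        return True                      # jobless, as in U-3
    if w.wants_full_time and not w.has_full_time:
        return True                      # under-employed part-timer
    return w.annual_income < POVERTY_WAGE  # working, but at a poverty wage

def functional_unemployment_rate(people: list[Worker]) -> float:
    labor_force = [w for w in people if w.in_labor_force]
    return sum(functionally_unemployed(w) for w in labor_force) / len(labor_force)
```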

[Ludwig goes on to analyze data on wages, inflation, and GDP, finding them similarly flawed…]

… Take all of these statistical discrepancies together. What we have here is a collection of economic indicators that all point in the same misleading direction. They all shroud the reality faced by middle- and lower-income households. The problem isn’t that some Americans didn’t come out ahead after four years of Bidenomics. Some did. It’s that, for the most part, those living in more modest circumstances have endured at least 20 years of setbacks, and the last four years did not turn things around enough for the lower 60 percent of American income earners.

To be fair, the prevailing indicators aren’t without merit. It is, for example, useful to know how the wages of full-time employees have evolved. The challenge, quite separate from any quibbling with the talented people working to tell the nation’s economic story, is to provide policymakers with a full picture of the reality faced by the bulk of the population. What we need is to find new ways to provide a more realistic picture of the nation’s underlying economic conditions on a monthly basis. The indicators my colleagues and I have constructed could serve as the basis for or inspiration for government-sponsored alternatives. Regardless, something needs to change.

This should not be a partisan issue — policymakers in both parties would benefit from gleaning a more accurate sense of what’s happening at the ground level of the American economy. In reality, both Democrats and Republicans were vulnerable to being snowed in the 2024 cycle — it just happened that the dissatisfaction during this particular cycle undermined the incumbent party.

In an age where faith in institutions of all sorts is in free fall, Americans are perpetually told, per a classic quote from former Sen. Daniel Patrick Moynihan, that while we may be entitled to our own opinions, we aren’t entitled to our own facts. That should be right, at least in the realm of economics. But the reality is that, if the prevailing indicators remain misleading, the facts don’t apply. We have it in our grasp to cut through the mirage that led Democrats astray in 2024. The question now is whether we will correct course…

On the need to revise our economic reference statistics: “Voters Were Right About the Economy. The Data Was Wrong.” from @LISEP_org in @POLITICOMag. Eminently worth reading in full.

More on (and more-current readings of) the suggested “revised metrics” at the Ludwig Institute for Shared Economic Prosperity.

* Aaron Levenstein

###

As we muse on measurement and meaning, we might recall that it was on this date in 1979 that The Cars released “Good Times Roll,” the third single from their eponymously-titled debut album.

source


“Those who are not shocked when they first come across quantum theory cannot possibly have understood it”*…

Werner Heisenberg, Erwin Schrödinger, and Niels Bohr by Tasnuva Elahi

A scheduling note: your correspondent is headed onto the road for a couple of weeks, so (Roughly) Daily will be a lot more roughly than daily until September 20th or so.

100 years ago, a circle of physicists shook the foundation of science. As Philip Ball explains, it’s still trembling…

In 1926, tensions were running high at the Institute for Theoretical Physics in Copenhagen. The institute was established 10 years earlier by the Danish physicist Niels Bohr, who had shaped it into a hothouse for young collaborators to thrash out a new theory of atoms. In 1925, one of Bohr’s protégés, the brilliant and ambitious German physicist Werner Heisenberg, had produced such a theory. But now everyone was arguing with each other about what it implied for the nature of physical reality itself.

To the Copenhagen group, it appeared reality had come undone…

[Ball tells the story of Niels Bohr’s building on Max Planck, of Werner Heisenberg’s wrangling of Bohr’s thought into theory, of Einstein’s objections and Erwin Schrödinger’s competing theory; then he homes in on the ontological issue at stake…]

Quantum mechanics, they said, demanded we throw away the old reality and replace it with something fuzzier, indistinct, and disturbingly subjective. No longer could scientists suppose that they were objectively probing a pre-existing world. Instead, it seemed that the experimenter’s choices determined what was seen—what, in fact, could be considered real at all.

In other words, the world is not simply sitting there, waiting for us to discover all the facts about it. Heisenberg’s uncertainty principle implied that those facts are determined only once we measure them. If we choose to measure an electron’s speed (more strictly, its momentum) precisely, then this becomes a fact about the world—but at the expense of accepting that there are simply no facts about its position. Or vice versa…

…A century later, scientists are still arguing about this issue of what quantum mechanics means for the nature of reality…

[Ball recounts subsequent attempts to reconcile quantum theory to “reality,” including Schrödinger’s wave mechanics…]

… Schrödinger’s wave mechanics didn’t restore the kind of reality he and Einstein wanted. His theory represented all that could be said about a quantum object in the form of a mathematical expression called the wave function, from which one can predict the outcomes of making measurements on the object. The wave function looks much like a regular wave, like sound waves in air or water waves on the sea. But a wave of what?

At first, Schrödinger supposed that the amplitude of the wave—think of it like the height of a water wave—at a given point in space was a measure of the density of the smeared-out quantum particle there. But Born argued that in fact this amplitude (more precisely, the square of the amplitude) is a measure of the probability that we will find the particle there, if we make a measurement of its position.
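
[In standard modern notation (not part of Ball’s text): if ψ(x, t) is the wave function, the Born rule says the probability density of finding the particle at position x on measurement is the squared magnitude of that amplitude, with the total probability normalized to one.]

```latex
p(x, t) = |\psi(x, t)|^{2}, \qquad \int_{-\infty}^{\infty} |\psi(x, t)|^{2}\, dx = 1
```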

This so-called Born rule goes to the heart of what makes quantum mechanics so odd. Classical Newtonian mechanics allows us to calculate the trajectory of an object like a baseball or the moon, so that we can say where it will be at some given time. But Schrödinger’s quantum mechanics doesn’t give us anything equivalent to a trajectory for a quantum particle. Rather, it tells us the chance of getting a particular measurement outcome. It seems to point in the opposite direction of other scientific theories: not toward the entity it describes, but toward our observation of it. What if we don’t make a measurement of the particle at all? Does the wave function still tell us the probability of its being at a given point at a given time? No, it says nothing about that—or more properly, it permits us to say nothing about it. It speaks only to the probabilities of measurement outcomes.

Crucially, this means that what we see depends on what and how we measure. There are situations for which quantum mechanics predicts that we will see one outcome if we measure one way, and a different outcome if we measure the same system in a different way. And this is not, as is sometimes implied (this was the cause of Heisenberg’s row with Bohr), because making a measurement disturbs the object in some physical manner, much as we might very slightly disturb the temperature of a solution in a test-tube by sticking a thermometer into it. Rather, it seems to be a fundamental property of nature that the very fact of acquiring information about it induces a change.

If, then, by reality we mean what we can observe of the world (for how can we meaningfully call something real if it can’t be seen, detected, or even inferred in any way?), it is hard to avoid the conclusion that we play an active role in determining what is real—a situation the American physicist John Archibald Wheeler called the “participatory universe.”..

… Heisenberg’s “uncertainty” captured that sense of the ground shifting. It was not the ideal word—Heisenberg himself originally used the German Ungenauigkeit, meaning something closer to “inexactness,” as well as Unbestimmtheit, which might be translated as “undeterminedness.” It was not that one was uncertain about the situation of a quantum object, but that there was nothing to be certain about.

There was an even more disconcerting implication behind the uncertainty principle. The vagueness of quantum phenomena, when an electron in an atom might seem to jump from one energy state to another at a time of its own choosing, seemed to indicate the demise of causality itself. Things happened in the quantum world, but one could not necessarily adduce a reason why. In his 1927 paper on the uncertainty principle, Heisenberg challenged the idea that causes in nature lead to predictable effects. That seemed to undermine the very foundation of science, and it made the world seem like a lawless, somewhat arbitrary place….

… One of Bohr’s most provocative views was that there is a fundamental distinction between the fuzzy, probabilistic quantum world and the classical world of real objects in real places, where measurements of, say, an electron with a macroscopic instrument tell us that it is here and not there.

What Bohr meant is shocking. Reality, he implied, doesn’t consist of objects located in time and space. It consists of “quantum events,” which are obliged to be self-consistent (in the sense that quantum mechanics can describe them accurately) but not classically consistent with one another. One implication of this, as far as we can currently tell, is that two observers can see different and conflicting outcomes from an event—yet both can be right.

But this rigid distinction between the quantum and classical worlds can’t be sustained today. Scientists can now conduct experiments that probe size scales in between those where quantum and classical rules are thought to apply—neither microscopic (the atomic scale) nor macroscopic (the human scale), but mesoscopic (an intermediate size). We can look, for example, at the behavior of nanoparticles that can be seen and manipulated yet are small enough to be governed by quantum rules. Such experiments confirm the view that there is no abrupt boundary of quantum and classical. Quantum effects can still be observed at these intermediate scales if our devices are sensitive enough, but those effects can be harder to discern as the number of particles in the system increases.

To understand such experiments, it’s not necessary to adopt any particular interpretation of quantum mechanics, but merely to apply the standard theory—encompassed within Schrödinger’s wave mechanics, say—more expansively than Bohr and colleagues did, using it to explore what happens to a quantum object as it interacts with its surrounding environment. In this way, physicists are starting to understand how information gets out of a quantum system and into its environment, and how, as it does so, the fuzziness of quantum probabilities morphs into the sharpness of classical measurement. Thanks to such work, it is beginning to seem that our familiar world is just what quantum mechanics looks like when you are 6 feet tall.

But even if we manage to complete that project of uniting the quantum with the classical, we might end up none the wiser about what manner of stuff—what kind of reality—it all arises from. Perhaps one day another deeper theory will tell us. Or maybe the Copenhagen group was right a hundred years ago that we just have to accept a contingent, provisional reality: a world only half-formed until we decide how it will be…

Eminently worth reading in full: “When Reality Came Undone,” from @philipcball in @NautilusMag.

See also: When We Cease to Understand the World, by Benjamin Labatut.

* Niels Bohr

###

As we wrestle with reality, we might spare a thought for Ludwig Boltzmann; he died on this date in 1906. A physicist and philosopher, he is best remembered for the development of statistical mechanics, and the statistical explanation of the second law of thermodynamics (which connected entropy and probability).

Boltzmann helped pave the way for quantum theory both with his development of statistical mechanics (which is a pillar of modern physics) and with his 1877 suggestion that the energy levels of a physical system could be discrete.

source

“Chance, too, which seems to rush along with slack reins, is bridled and governed by law”*…

And the history of our understanding of those laws is, as Tom Chivers explains (in an excerpt from his book, Everything is Predictable), both fascinating and illuminating…

Traditionally, the story of the study of probability begins in French gambling houses in the mid-seventeenth century. But we can start it earlier than that.

The Italian polymath Gerolamo Cardano had attempted to quantify the maths of dice gambling in the sixteenth century. What, for instance, would the odds be of rolling a six on four rolls of a die, or a double six on twenty-four rolls of a pair of dice?

His working went like this. The probability of rolling a six is one in six, or 1/6, or about 17 percent. Normally, in probability, we don’t give a figure as a percentage, but as a number between zero and one, which we call p. So the probability of rolling a six is p = 0.17. (Actually, 0.1666666… but I’m rounding it off.)

Cardano, reasonably enough, assumed that if you roll the die four times, your probability is four times as high: 4/6, or about 0.67. But if you stop and think about it for a moment, that can’t be right, because it would imply that if you rolled the die six times, your chance of getting a six would be one-sixth times six, or one: that is, certainty. But obviously it’s possible to roll six times and have none of the dice come up six.

What threw Cardano is that the average number of sixes you’ll see on four dice is 0.67. But sometimes you’ll see three, sometimes you’ll see none. The odds of seeing a six (or, separately, at least one six) are different.

In the case of the one die rolled four times, you’d get it badly wrong—the real answer is about 0.52, not 0.67—but you’d still be right to bet, at even odds, on a six coming up. If you used Cardano’s reasoning for the second question, though, about how often you’d see a double six on twenty-four rolls, it would lead you seriously astray in a gambling house. His math would suggest that, since a double six comes up one time in thirty-six (p ≈ 0.03), then rolling the dice twenty-four times would give you twenty-four times that probability, twenty-four in thirty-six or two-thirds (p ≈ 0.67, again).

This time, though, his reasonable but misguided thinking would put you on the wrong side of the bet. The probability of seeing a double six in twenty-four rolls is 0.49, slightly less than half. You’d lose money betting on it. What’s gone wrong?

A century or so later, in 1654, Antoine Gombaud, a gambler and amateur philosopher who called himself the Chevalier de Méré, was interested in the same questions, for obvious professional reasons. He had noticed exactly what we’ve just said: that betting that you’ll see at least one six in four rolls of a die will make you money, whereas betting that you’ll see at least one double six in twenty-four rolls of two dice will not. Gombaud, through simple empirical observation, had got to a much more realistic position than Cardano. But he was confused. Why were the two outcomes different? After all, six is to four as thirty-six is to twenty-four. He recruited a friend, the mathematician Pierre de Carcavi, but together they were unable to work it out. So they asked a mutual friend, the great mathematician Blaise Pascal.

The solution to this problem isn’t actually that complicated. Cardano had got it exactly backward: the idea is not to look at the chances that something would happen by the number of goes you take, but to look at the chances it wouldn’t happen…
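
[Worked out with that complement trick, both of the figures quoted above drop out in a line each: compute the chance the target outcome never happens, then subtract from one.]

```latex
P(\text{at least one 6 in 4 rolls}) = 1 - \left(\tfrac{5}{6}\right)^{4} \approx 0.518,
\qquad
P(\text{at least one double 6 in 24 rolls}) = 1 - \left(\tfrac{35}{36}\right)^{24} \approx 0.491
```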

… Pascal came up with a cheat. He wasn’t the first to use what we now call Pascal’s triangle—it was known in ancient China, where it is named after the mathematician Yang Hui, and in second-century India. But Pascal was the first to use it in problems of probability.

It starts with 1 at the top, and fills out each layer below with a simple rule: on every row, add the number above and to the left to the number above and to the right. If there is no number in one of those places, treat it as zero…

… Now, if you want to know what the probability is of seeing exactly Y outcomes, say heads, on those seven coin flips:

It’s possible that you’ll see no heads at all. But it requires every single coin coming up tails. Of all the possible combinations of heads and tails that could come up, only one—tails on every single coin—gives you zero heads and seven tails.

There are seven combinations that give you one head and six tails. Of the seven coins, one needs to come up heads, but it doesn’t matter which one. There are twenty-one ways of getting two heads. (I won’t enumerate them all here; I’m afraid you’re going to have to trust me, or check.) And thirty-five of getting three.

You see the pattern? 1 7 21 35—it’s row seven of the triangle…

Pascal’s triangle is only one way of working out the probability of seeing some number of outcomes, although it’s a very neat way. In situations where there are two possible outcomes, like flipping a coin, it’s called a “binomial distribution.”

But the point is that when you’re trying to work out how likely something is, what we need to talk about is the number of outcomes— the number of outcomes that result in whatever it is you’re talking about, and the total number of possible outcomes. This was, I think it’s fair to say, the first real formalization of the idea of “probability.”..
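
[A minimal sketch of the same computation (your correspondent’s, not Chivers’): build row seven of the triangle with the add-the-two-numbers-above rule, then divide each count by the 2⁷ = 128 equally likely head/tail sequences to get the binomial probabilities.]

```python
import math

def pascal_row(n: int) -> list[int]:
    """Row n of Pascal's triangle, built with the add-the-two-above rule."""
    row = [1]
    for _ in range(n):
        # Pad with zeros at both ends, then sum neighbouring pairs.
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

n_flips = 7
counts = pascal_row(n_flips)   # [1, 7, 21, 35, 35, 21, 7, 1]
total = 2 ** n_flips           # 128 equally likely head/tail sequences

for heads, ways in enumerate(counts):
    assert ways == math.comb(n_flips, heads)  # same counts via the closed form
    print(f"{heads} heads: {ways:>2} ways -> p = {ways / total:.4f}")
```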

On the historical origins of the science of probability and statistics: “Rolling the Dice: What Gambling Can Teach Us About Probability,” from @TomChivers in @lithub.

See also: Against the Gods, by Peter Bernstein.

And for a look at how related concepts shape thinking among quantum physicists, see “The S-Matrix Is the Oracle Physicists Turn to in Times of Crisis.”

* Boethius, The Consolation of Philosophy

###

As we roll the bones, we might send carefully-calculated birthday greetings to a central player in this saga, Abraham de Moivre; he was born on this date in 1667. A mathematician, he’s known for de Moivre’s formula, which links complex numbers and trigonometry, and (more relevantly to the piece above) for his work on the normal distribution and probability theory. De Moivre was the first to postulate the central limit theorem (TLDR: the probability distribution of averages of outcomes of independent observations will closely approximate a normal distribution)– a cornerstone of probability theory. And in his time, his book on probability, The Doctrine of Chances, was prized by gamblers.
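
In modern notation (not de Moivre’s own), the theorem says that for independent, identically distributed observations X₁, …, Xₙ with mean μ and finite variance σ², the suitably scaled average converges in distribution to a normal:

```latex
\sqrt{n}\,\left(\bar{X}_n - \mu\right) \;\xrightarrow{\;d\;}\; \mathcal{N}\!\left(0, \sigma^{2}\right)
\quad\text{as } n \to \infty, \qquad \bar{X}_n = \tfrac{1}{n}\sum_{i=1}^{n} X_i
```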

source

“We couldn’t build quantum computers unless the universe were quantum and computing… We’re hacking into the universe.”*…

… in the process of which, as Ben Brubaker explains, we learn some fascinating things…

If you want to tile a bathroom floor, square tiles are the simplest option — they fit together without any gaps in a grid pattern that can continue indefinitely. That square grid has a property shared by many other tilings: Shift the whole grid over by a fixed amount, and the resulting pattern is indistinguishable from the original. But to many mathematicians, such “periodic” tilings are boring. If you’ve seen one small patch, you’ve seen it all.

In the 1960s, mathematicians began to study “aperiodic” tile sets with far richer behavior. Perhaps the most famous is a pair of diamond-shaped tiles discovered in the 1970s by the polymathic physicist and future Nobel laureate Roger Penrose. Copies of these two tiles can form infinitely many different patterns that go on forever, called Penrose tilings. Yet no matter how you arrange the tiles, you’ll never get a periodic repeating pattern.

“These are tilings that shouldn’t really exist,” said Nikolas Breuckmann, a physicist at the University of Bristol.

For over half a century, aperiodic tilings have fascinated mathematicians, hobbyists and researchers in many other fields. Now, two physicists have discovered a connection between aperiodic tilings and a seemingly unrelated branch of computer science: the study of how future quantum computers can encode information to shield it from errors. In a paper posted to the preprint server arxiv.org in November, the researchers showed how to transform Penrose tilings into an entirely new type of quantum error-correcting code. They also constructed similar codes based on two other kinds of aperiodic tiling.

At the heart of the correspondence is a simple observation: In both aperiodic tilings and quantum error-correcting codes, learning about a small part of a large system reveals nothing about the system as a whole…

Fascinating: “Never-Repeating Tiles Can Safeguard Quantum Information,” from @benbenbrubaker in @QuantaMagazine.

Plus- bonus background on tiling.

* “We couldn’t build quantum computers unless the universe were quantum and computing. We can build such machines because the universe is storing and processing information in the quantum realm. When we build quantum computers, we’re hijacking that underlying computation in order to make it do things we want: little and/or/not calculations. We’re hacking into the universe.” –Seth Lloyd

###

As we care for qubits, we might send carefully-calculated birthday greetings to Herman Hollerith; he was born on this date in 1860. A statistician and inventor, he was a seminal figure in the development of data processing: he invented (for the 1890 U.S. Census) an electromechanical tabulating machine for punched cards to assist in summarizing information (and, later, for use in accounting). The machine, which he patented in 1884, marked the beginning of the era of mechanized binary code and semiautomatic data processing systems– and his approach dominated that landscape for nearly a century.

The company that Hollerith founded to exploit his invention was merged in 1911 with several other companies to form the Computing-Tabulating-Recording Company. In 1924, the company was renamed “International Business Machines” (or, as we know it, IBM).

source