(Roughly) Daily

“Many of the things you can count, don’t count. Many of the things you can’t count, really count”*…

Still, we count… and have, as Keith Houston explains, for much, if not most of human history…

Figuring out when humans began to count systematically, with purpose, is not easy. Our first real clues are a handful of curious, carved bones dating from the final few millennia of the three-million-year expanse of the Old Stone Age, or Paleolithic era. Those bones are humanity’s first pocket calculators: For the prehistoric humans who carved them, they were mathematical notebooks and counting aids rolled into one. For the anthropologists who unearthed them thousands of years later, they were proof that our ability to count had manifested itself no later than 40,000 years ago.

Counting, fundamentally, is the act of assigning distinct labels to each member of a group of similar things to convey either the size of that group or the position of individual items within it. The first type of counting yields cardinal numbers such as “one,” “two,” and “three”; the second gives ordinals such as “first,” “second,” and “third.”

At first, our hominid ancestors probably did not count very high. Many body parts present themselves in pairs—arms, hands, eyes, ears, and so on—thereby leading to an innate familiarity with the concept of a pair and, by extension, the numbers 1 and 2. But when those hominids regarded the wider world, they did not yet find a need to count much higher. One wolf is manageable; two wolves are a challenge; any more than that and time spent counting wolves is better spent making oneself scarce. The result is that the very smallest whole numbers have a special place in human culture, and especially in language. English, for instance, has a host of specialized terms centered around twoness: a brace of pheasants; a team of horses; a yoke of oxen; a pair of, well, anything. An ancient Greek could employ specific plurals to distinguish between groups of one, two, and many friends (ho philos, to philo, and hoi philoi). In Latin, the numbers 1 to 4 get special treatment, much as “one” and “two” map onto the irregular “first” and “second,” while “three” and “four” correspond directly with “third” and “fourth.” The Romans extended that special treatment into their day-to-day lives: after their first four sons, a Roman family would typically name the rest by number (Quintus, Sextus, Septimus, and so forth), and only the first four months of the early Roman calendar had proper names. Even tally marks, the age-old “five-barred gate” used to score card games or track rounds of drinks, speak of a deep-seated need to keep things simple.

Counting in the prehistoric world would have been intimately bound to the actual, not the abstract. Some languages still bear traces of this: a speaker of Fijian may say doko to mean “one hundred mulberry bushes,” but also koro to mean “one hundred coconuts.” Germans will talk about a Faden, meaning a length of thread about the same width as an adult’s outstretched arms. The Japanese count different kinds of things in different ways: there are separate sequences of cardinal numbers for books; for other bundles of paper such as magazines and newspapers; for cars, appliances, bicycles, and similar machines; for animals and demons; for long, thin objects such as pencils or rivers; for small, round objects; for people; and more.

Gradually, as our day-to-day lives took on more structure and sophistication, so, too, did our ability to count. When farming a herd of livestock, for example, keeping track of the number of one’s sheep or goats was of paramount importance, and as humans divided themselves more rigidly into groups of friends and foes, those who could count allies and enemies had an advantage over those who could not. Number words graduated from being labels for physical objects into abstract concepts that floated around in the mental ether until they were assigned to actual things.

Even so, we still have no real idea how early humans started to count in the first place. Did they gesture? Speak? Gather pebbles in the correct amount? To form an educated guess, anthropologists have turned to those tribes and peoples isolated from the greater body of humanity, whether by accident of geography or deliberate seclusion. The conclusion they reached is simple. We learned to count with our fingers…

From “The Early History of Counting,” an excerpt from Houston’s new book Empire of the Sum: The Rise and Reign of the Pocket Calculator: @OrkneyDullard in @laphamsquart.

* Albert Einstein

###

As we tally, we might send carefully calculated birthday greetings to Stephen Wolfram; he was born on this date in 1959. A computer scientist, mathematician, physicist, and businessman, he has made contributions to all of these fields. But he is probably best known for his creation of the software system Mathematica (a kind of “idea processor” that allows scientists and technologists to work fluidly in equations, code, and text), which is linked to WolframAlpha (an online answer engine that provides additional data, some of which is kept updated in real time).

source

Written by (Roughly) Daily

August 29, 2023 at 1:00 am

“Life is a Zen koan, that is, an unsolvable riddle. But the contemplation of that riddle – even though it cannot be solved – is, in itself, transformative.”*…

How hard is it to prove that problems are hard to solve? Meta-complexity theorists have been asking questions like this for decades. And as Ben Brubaker explains, a string of recent results has started to deliver answers…

… Even seasoned researchers find understanding in short supply when they confront the central open question in theoretical computer science, known as the P versus NP problem. In essence, that question asks whether many computational problems long considered extremely difficult can actually be solved easily (via a secret shortcut we haven’t discovered yet), or whether, as most researchers suspect, they truly are hard. At stake is nothing less than the nature of what’s knowable.
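To make that asymmetry concrete, consider a classic NP problem, subset sum: given a list of numbers, is there a subset that adds up to a given target? Checking a proposed answer is trivial, but the only general methods we know for finding one examine exponentially many candidates. The sketch below (a toy illustration of mine, not from Brubaker’s article) shows the gap in Python; P versus NP asks, roughly, whether every problem whose answers can be checked this quickly can also be solved this quickly.

```python
from itertools import combinations

def verify(candidate, target):
    """Checking a proposed solution is easy: just add it up."""
    return sum(candidate) == target

def brute_force_solve(numbers, target):
    """Finding a solution by exhaustive search may examine up to 2^n subsets."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

numbers = [3, 34, 4, 12, 5, 2]
print(verify((4, 12, 2), 18))          # True -- verification takes a handful of additions
print(brute_force_solve(numbers, 18))  # (4, 12, 2) -- but the search time grows
                                       # exponentially as the list gets longer
```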

Despite decades of effort by researchers in the field of computational complexity theory — the study of such questions about the intrinsic difficulty of different problems — a resolution to the P versus NP question has remained elusive. And it’s not even clear where a would-be proof should start.

“There’s no road map,” said Michael Sipser, a veteran complexity theorist at the Massachusetts Institute of Technology who spent years grappling with the problem in the 1980s. “It’s like you’re going into the wilderness.”

It seems that proving that computational problems are hard to solve is itself a hard task. But why is it so hard? And just how hard is it? Marco Carmosino and other researchers in the subfield of meta-complexity reformulate questions like this as computational problems, propelling the field forward by turning the lens of complexity theory back on itself.

“You might think, ‘OK, that’s kind of cool. Maybe the complexity theorists have gone crazy,’” said Rahul Ilango, a graduate student at MIT who has produced some of the most exciting recent results in the field.

By studying these inward-looking questions, researchers have learned that the hardness of proving computational hardness is intimately tied to fundamental questions that may at first seem unrelated. How hard is it to spot hidden patterns in apparently random data? And if truly hard problems do exist, how often are they hard?

“It’s become clear that meta-complexity is close to the heart of things,” said Scott Aaronson, a complexity theorist at the University of Texas, Austin.

This is the story of the long and winding trail that led researchers from the P versus NP problem to meta-complexity. It hasn’t been an easy journey — the path is littered with false turns and roadblocks, and it loops back on itself again and again. Yet for meta-complexity researchers, that journey into an uncharted landscape is its own reward. Start asking seemingly simple questions, said Valentine Kabanets, a complexity theorist at Simon Fraser University in Canada, and “you have no idea where you’re going to go.”…

Complexity theorists are confronting their most puzzling problem yet– complexity theory itself: “Complexity Theory’s 50-Year Journey to the Limits of Knowledge,” from @benbenbrubaker in @QuantaMagazine.

* Tom Robbins

###

As we limn limits, we might send thoroughly cooked birthday greetings to Denis Papin; he was born on this date in 1647. A mathematician and physicist who worked with Christiaan Huygens and Gottfried Leibniz, Papin is better remembered as the inventor of the steam digester, the forerunner of the pressure cooker and of the steam engine.

source

“Those who can imagine anything, can create the impossible”*…

As Charlie Wood explains, physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors…

… When it comes to conventional machine learning, computer scientists have discovered that bigger is better. Stuffing a neural network with more artificial neurons — nodes that store numerical values — improves its ability to tell a dachshund from a Dalmatian, or to succeed at myriad other pattern recognition tasks. Truly tremendous neural networks can pull off unnervingly human undertakings like composing essays and creating illustrations. With more computational muscle, even grander feats may become possible. This potential has motivated a multitude of efforts to develop more powerful and efficient methods of computation.

[Cornell’s Peter McMahon] and a band of like-minded physicists champion an unorthodox approach: Get the universe to crunch the numbers for us. “Many physical systems can naturally do some computation way more efficiently or faster than a computer can,” McMahon said. He cites wind tunnels: When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.

A wind tunnel is a single-minded machine; it simulates aerodynamics. Researchers like McMahon are after an apparatus that can learn to do anything — a system that can adapt its behavior through trial and error to acquire any new ability, such as classifying handwritten digits or distinguishing one spoken vowel from another. Recent work has shown that physical systems like waves of light, networks of superconductors and branching streams of electrons can all learn.

“We are reinventing not just the hardware,” said Benjamin Scellier, a mathematician at the Swiss Federal Institute of Technology Zurich in Switzerland who helped design a new physical learning algorithm, but “also the whole computing paradigm.”…

Computing at the largest scale? “How to Make the Universe Think for Us,” from @walkingthedot in @QuantaMagazine.

* Alan Turing

###

As we think big, we might send well-connected birthday greetings to Leonard Kleinrock; he was born on this date in 1934. A computer scientist, he made several foundational contributions to the field, in particular to the theoretical foundations of data communication in computer networking. Perhaps most notably, he was central to the development of ARPANET (which essentially grew up to be the internet); his graduate students at UCLA were instrumental in developing the communication protocols for internetworking that made that possible.

Kleinrock at a meeting of the members of the Internet Hall of Fame

source

“One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”*…

And yet, for centuries no one has succeeded. Now, as Erica Klarreich reports, cryptographers want to know which of five possible worlds we inhabit, which will reveal whether truly secure cryptography is even possible…

Many computer scientists focus on overcoming hard computational problems. But there’s one area of computer science in which hardness is an asset: cryptography, where you want hard obstacles between your adversaries and your secrets.

Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment.

To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible?
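The kind of hardness cryptographers are after can be illustrated with a toy example (mine, not Klarreich’s): multiplying two primes is fast, while recovering them from the product appears to take vastly longer, and much of deployed public-key cryptography rests on the unproven belief that this and similar inversions really are hard. A minimal sketch in Python:

```python
import math

def easy_forward(p, q):
    """Multiplying two primes is fast, even when they are enormous."""
    return p * q

def hard_backward(n):
    """Recovering the factors by trial division takes on the order of sqrt(n)
    steps -- hopeless once p and q are hundreds of digits long."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return None

n = easy_forward(100_003, 100_019)  # instant
print(hard_backward(n))             # finds (100003, 100019) after ~10^5 trial divisions;
                                    # with 300-digit primes the same search would
                                    # outlast the universe
```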

In 1995, Russell Impagliazzo of the University of California, San Diego broke down the question of hardness into a set of sub-questions that computer scientists could tackle one piece at a time. To summarize the state of knowledge in this area, he described five possible worlds — fancifully named Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania — with ascending levels of hardness and cryptographic possibility. Any of these could be the world we live in…

Explore each of them– and their implications for secure encryption– at “Which Computational Universe Do We Live In?” from @EricaKlarreich in @QuantaMagazine.

* Charles Babbage

###

As we contemplate codes, we might send communicative birthday greetings to a frequently-featured hero of your correspondent, Claude Elwood Shannon; he was born on this date in 1916. A mathematician, electrical engineer– and cryptographer– he is known as “the father of information theory.” But he is also remembered for his contributions to digital circuit design theory and for his cryptanalysis work during World War II, both as a codebreaker and as a designer of secure communications systems.


 source

“Information was found to be everywhere”*…

A newly proposed experiment could confirm the fifth state of matter in the universe—and change physics as we know it…

Physicist Dr. Melvin Vopson has already published research suggesting that information has mass and that all elementary particles, the smallest known building blocks of the universe, store information about themselves, similar to the way humans have DNA.

Now, he has designed an experiment which—if it proves correct—would mean he has discovered that information is the fifth form of matter, alongside solid, liquid, gas and plasma…

Dr. Vopson said: “This would be a eureka moment because it would change physics as we know it and expand our understanding of the universe. But it wouldn’t conflict with any of the existing laws of physics. It doesn’t contradict quantum mechanics, electrodynamics, thermodynamics or classical mechanics. All it does is complement physics with something new and incredibly exciting.”

Dr. Vopson’s previous research suggests that information is the fundamental building block of the universe and has physical mass. He even claims that information could be the elusive dark matter that makes up almost a third of the universe…
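For a sense of why an experiment is needed at all: Vopson’s earlier mass-energy-information papers put the rest mass of a single stored bit at m = k_B·T·ln 2 / c². The back-of-envelope below (a sketch assuming that published formula and room temperature, not a calculation from the article) shows the number is far too small for any balance to detect directly.

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s
T = 300.0            # room temperature, K

# Vopson's proposed rest mass of one bit of stored information
m_bit = k_B * T * math.log(2) / c**2
print(f"{m_bit:.2e} kg per bit")   # ~3.2e-38 kg -- far too small to weigh directly,
                                   # hence the proposed indirect experiment
```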

Is information a key element of everything in the universe? “New experiment could confirm the fifth state of matter in the universe.”

* James Gleick, The Information: A History, a Theory, a Flood

###

As we go deep, we might send thoroughly-modeled birthday greetings to Stanislaw Ulam; he was born on this date in 1909. A mathematician and nuclear physicist, he originated the Teller–Ulam design of thermonuclear weapons, discovered the concept of the cellular automaton, and suggested nuclear pulse propulsion.

But his most impactful contribution may have been his creation of the Monte Carlo method of computation. While playing solitaire during his recovery from surgery, Ulam had thought about playing hundreds of games to estimate statistically the probability of a successful outcome. With ENIAC in mind, he realized that the availability of computers made such statistical methods very practical, and in 1949, he and Nicholas Metropolis published the first unclassified paper on the Monte Carlo method… which is now widely used in virtually every scientific field, in engineering and computer science, finance and business, and the law.
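In the spirit of Ulam’s solitaire, here is a minimal Monte Carlo sketch (mine, not his actual calculation): estimate, by brute repetition, the probability that a shuffled deck leaves no card in its original position. The exact answer is close to 1/e ≈ 0.368, so the simulation is easy to check.

```python
import random

def no_card_in_place(deck_size=52):
    """One trial: shuffle a deck and report whether no card lands
    back in its original position (a 'derangement')."""
    deck = list(range(deck_size))
    random.shuffle(deck)
    return all(card != position for position, card in enumerate(deck))

def monte_carlo_estimate(trials=100_000):
    """Estimate the probability by repeating the experiment many times,
    as Ulam imagined doing with hundreds of solitaire games."""
    successes = sum(no_card_in_place() for _ in range(trials))
    return successes / trials

print(monte_carlo_estimate())   # ~0.368, close to the exact limit 1/e
```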

source
